
How do you signal-process data?

We process as many signals, in as many modalities, as we can, using an ensemble of signal processors built on TensorFlow, external APIs, the Wolfram Language/Wolfram Cloud, and more.

In general we try to use processors that match the medium of the signal, but almost everything in the universe can be decomposed into (or composed from) waveforms. That is why there is such strong overlap between information theory and plain old physics, why Fast Fourier Transforms appear everywhere, and why Bayesian ideas prove useful in quantum mechanics, economics, and beyond.
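As a minimal sketch of that waveform decomposition, here is an FFT pulling the dominant frequencies out of a synthetic signal. The sample rate, tones, and amplitudes are illustrative assumptions, not values from our pipeline:

```python
import numpy as np

# Assumed sample rate for this illustration (Hz).
sr = 1000
t = np.arange(0, 1.0, 1.0 / sr)

# Synthetic signal: a strong 50 Hz tone plus a weaker 120 Hz tone.
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Decompose into frequency components with a real-input FFT.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)

# The largest spectral peak recovers the dominant component.
dominant = freqs[np.argmax(spectrum)]
print(dominant)  # 50.0
```

The same idea generalizes: any measured waveform, audio or otherwise, can be summarized by which frequency components carry its energy.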

  • A signal is any data point that can be measured. An audio file alone has hundreds of signals to measure, from raw ones such as length, time, file size, pitch, frequency, and loudness to more linguistic ones like vocabulary, complexity, entities, and sentiment.
  • We develop algorithms that decompose, or break apart, those signals to convey information. This information is “probably approximately correct,” meaning we aren’t aiming for 100% accuracy (remember: even a human struggles to observe their own thoughts and emotions accurately) but for enough precision to support inference. Our job isn’t to predict; it is to convey enough information for a person to ask better questions.
  • For example, energy is a mirror of the user’s physiological and mental fitness. It is measured by changes in speaking rate (slow vs. fast), frequency (high or low), and volume (loud or soft). Think of this in human terms: if a person is talking faster, louder, and higher than in a previous interaction, we can assume they have high energy; if they are talking slower, lower, and softer, we can assume their energy is low.
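The energy comparison in the last bullet can be sketched as a simple vote over feature deltas. The feature names (`rate`, `pitch`, `volume`), units, and sample values below are illustrative assumptions, not our actual model:

```python
def energy_estimate(current: dict, baseline: dict) -> str:
    """Compare speech features against a previous interaction.

    Each of rate, pitch, and volume votes: faster/higher/louder
    suggests more energy, slower/lower/softer suggests less.
    """
    score = 0
    for feature in ("rate", "pitch", "volume"):
        if current[feature] > baseline[feature]:
            score += 1
        elif current[feature] < baseline[feature]:
            score -= 1
    if score > 0:
        return "high energy"
    if score < 0:
        return "low energy"
    return "unchanged"

# Hypothetical measurements: words/min, Hz, dB.
baseline = {"rate": 140, "pitch": 180, "volume": 60}
fast_loud_high = {"rate": 170, "pitch": 210, "volume": 68}
slow_soft_low = {"rate": 110, "pitch": 160, "volume": 52}

print(energy_estimate(fast_loud_high, baseline))  # high energy
print(energy_estimate(slow_soft_low, baseline))   # low energy
```

A majority vote keeps the estimate "probably approximately correct" in the spirit described above: one noisy feature can't flip the inference on its own.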