Geckobot.com originated 20 years ago as an “information search engine,” but today it serves as a showcase for consulting and research projects in the realms of cognitive and data sciences.

The story begins with Twister, a remarkable tool that aids in model development. Then we will embark on a journey through a series of models that investigate how humans process information, beginning with perception and ending with high-level cognition.

"One of the first rules of science is if somebody delivers a secret weapon to you, you better use it."

Herbert A. Simon

Introducing Twister: A Secret Weapon

Every model has adjustable parameters that can make or break its viability. For example, a neural network might have a parameter that controls how aggressively it learns: set too small, the network never picks up the patterns in the data; set too large, it overlearns and fails to generalize. Finding the right values is difficult and computationally expensive. Twister takes a new approach to this search, tuning models to be the best they can be.

Watch Twister zoom in on the best combination of four parameters in the test model (try not to blink!).

Twister reduces hundreds of thousands of calculations to hundreds.

  • Reduces time and equipment costs

  • Faster turnaround for model development

  • Greener for the planet!
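Twister’s actual algorithm isn’t described here, but a generic coarse-to-fine (“zoom”) search illustrates how an adaptive strategy can replace an exhaustive sweep. Everything in this sketch is an illustrative assumption: the function name, the toy two-parameter objective, and the evaluation counts.

```python
import itertools

def zoom_search(score, lo, hi, steps=4, rounds=3):
    """Coarse-to-fine search over a 2-parameter space.

    Evaluates a small grid, then re-centers a finer grid on the
    best point found so far, repeating for a few rounds.
    """
    best = None
    for _ in range(rounds):
        xs = [lo[0] + i * (hi[0] - lo[0]) / (steps - 1) for i in range(steps)]
        ys = [lo[1] + i * (hi[1] - lo[1]) / (steps - 1) for i in range(steps)]
        for x, y in itertools.product(xs, ys):
            s = score(x, y)
            if best is None or s > best[0]:
                best = (s, x, y)
        # Shrink the search window around the current best point.
        _, bx, by = best
        wx, wy = (hi[0] - lo[0]) / steps, (hi[1] - lo[1]) / steps
        lo, hi = (bx - wx, by - wy), (bx + wx, by + wy)
    return best

# Toy objective with a single peak at (0.3, 0.7).
peak = lambda x, y: -((x - 0.3) ** 2 + (y - 0.7) ** 2)
s, x, y = zoom_search(peak, (0.0, 0.0), (1.0, 1.0))
# 3 rounds x 16 evaluations = 48 score calls, versus 10,000
# for an exhaustive 100 x 100 grid at similar resolution.
```

The same idea scales to four or more parameters, which is where the savings grow from hundreds to hundreds of thousands of avoided evaluations.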

Cognitive Science or Data Science?

Cognitive science models learn and make mistakes the way people do, so they can be used to predict interactions with a device or website, or to answer questions about how and when to provide training. They are also useful when training data streams in over time, since they can adapt to changing situations without maintenance.

Data science models tend to be used in commercial applications, where mistakes are undesirable. Examples include self-driving cars, fraud detection, and recommendation engines. The tools data scientists use to make predictions are very flexible and can be applied to just about any data you have lying around.

Many of the models you will see here are unique in that they straddle both fields, using research in cognitive science to drive new tools for data science. Human-like qualities are configurable, and Twister can be used to decide how the model should be best tuned for a particular application.

Early Perception: Machine Learning to Hear

Modern speech recognition software relies on a plethora of engineering techniques. The human perceptual system rarely informs these approaches, perhaps because it is so complex and poorly understood. Still, wouldn’t it be nice if your phone understood you as well as other people do?

Instead of focusing on the mature perceptual system, this model focuses on the process of learning to perceive. Humans learn with neurons; neurons are just cells; and cells aren’t very smart. Therefore, the biological response to complex perceptual signals must be tractable.

Testing the theory is straightforward: If the model extracts the same features that people do, then we should be able to reconstruct the original signal from those features, and it will sound the same to us.
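The model’s actual features aren’t specified here, but the logic of the test itself can be sketched with a stand-in feature set: extract a compact description of a signal, rebuild the signal from only that description, and measure how close the reconstruction comes. In this toy version the “features” are the strongest Fourier coefficients; the real perceptual model’s features are different.

```python
import numpy as np

# Toy signal: two sinusoids sampled for one second.
t = np.linspace(0, 1, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# "Feature extraction": keep only the strongest spectral components.
spectrum = np.fft.rfft(signal)
features = np.zeros_like(spectrum)
top = np.argsort(np.abs(spectrum))[-4:]
features[top] = spectrum[top]

# Reconstruct from the features alone and compare to the original.
reconstruction = np.fft.irfft(features, n=len(signal))
error = np.max(np.abs(reconstruction - signal))
# If the features capture what matters, the error is tiny.
```

For real speech the comparison is ultimately perceptual rather than numerical: the reconstruction should sound the same to a listener, which is exactly what the audio clip below demonstrates.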

In the audio clip above, the first word is the original signal fed to the model, and the second is the model’s reproduction. Do they sound the same to you?

The perceptual model produces stunning visualizations to aid with investigations. 

Implicit Learning: The Road to Cognition

Much like music is composed of a series of notes, the perceptual model above predicts that complex signals will be broken down into a large number of lower-frequency events. This is the bridge between perception and high level cognition, where implicit learning occurs.

In data science parlance, this is analogous to unsupervised learning, which usually implies clustering. Here, the models use human learning dynamics to derive a new kind of clustering technology.

As the “clustering shaper” model absorbs streams of news stories, it discovers common English words and phrases with 75% precision. Can you tell when these news stories originally broke, based on what the machine learned?
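The clustering shaper’s internals aren’t described here. As a stand-in, a minimal sketch shows how recurring word pairs can surface from a stream by counting alone; the function name, threshold, and headlines are all invented for illustration.

```python
from collections import Counter

def discover_phrases(stream, min_count=2):
    """Count adjacent word pairs across a stream of headlines and
    return the pairs that recur: a crude stand-in for the kind of
    recurring structure an unsupervised learner can pick up."""
    pairs = Counter()
    for headline in stream:
        words = headline.lower().split()
        pairs.update(zip(words, words[1:]))
    return {" ".join(p) for p, n in pairs.items() if n >= min_count}

headlines = [
    "markets rally as rates hold steady",
    "rates hold steady for a third month",
    "rally fades after rates hold",
]
print(discover_phrases(headlines))
# "rates hold" recurs in all three headlines, so it survives the cut.
```

Raw counting like this is brittle; the point of using human learning dynamics instead is to get the same kind of discovery robustly from a continuous stream.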

Alpha: First Steps Towards Cognition

Alpha is a model that takes some first steps towards simulating high-level cognition. Unlike most models, Alpha is not preprogrammed to perform any specific task. Instead, it is a generalized model that combines perception, implicit learning, working memory, and supervised learning to determine what to do and optimize its behavior.

The model doesn’t compute a single statistic, yet complex Bayesian probability calculations emerge, just as we see in human cognition.

Alpha learns to respond, “Hello,” when so greeted.


Microsoft’s Cortana may one day use a cognitively inspired algorithm to recognize user behavior patterns (patent pending).

This spot reserved for your application.

Sleep-O-Meter tracks the user’s sleeping patterns and uses a biomathematical model to report levels of fatigue.

You’re Still Here?

Thanks for stopping by and looking over my work.

If you have any questions feel free to contact me.

Rick Moore