I was chatting with a couple of work colleagues recently about strong AI vs. weak AI. Strong AI loosely means trying to create a system or a robot that can do pretty much anything a human can do. Weak AI (also called narrow AI) loosely means creating a system that can do just one task that is normally associated with human abilities.
One of the oldest examples of weak AI is computer chess. (Note: there is a world chess championship match going on right now between current champ Magnus Carlsen and challenger Sergei Karjakin). Other examples of weak AI might be Apple’s Siri, Microsoft’s Cortana, and Amazon’s Alexa.
Most of my colleagues think that research should focus on strong AI. I hold the minority view that we are decades away from having enough raw computing power to get anywhere close to strong AI. My view is that work on strong AI may lead to a few unexpected breakthroughs, but a more efficient approach is to focus on weak AI problems and let strong AI more or less take care of itself.
Put slightly differently, I enjoy working on a specific problem, such as predicting NFL football scores (as in my Zoltar system), rather than on general problems or pure research.
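To give a flavor of what a narrow problem like score prediction looks like, here is a toy sketch. To be clear, this is not the actual Zoltar algorithm; the team names, the rating values, and the home-field constant are all hypothetical illustration values I made up for this example.

```python
# Toy NFL point-spread predictor -- a minimal sketch only,
# not the actual Zoltar system. All numbers are hypothetical.

HOME_EDGE = 2.5  # assumed average home-field advantage, in points

ratings = {        # hypothetical team strength ratings
    "Seahawks": 25.0,
    "Patriots": 27.0,
    "Browns":   15.0,
}

def predict_spread(home, away):
    """Predicted margin of victory for the home team.

    A negative result means the away team is favored.
    """
    return ratings[home] - ratings[away] + HOME_EDGE

print(predict_spread("Seahawks", "Patriots"))  # 0.5
print(predict_spread("Patriots", "Browns"))    # 14.5
```

Even a crude model like this captures the shape of the weak-AI task: a narrow, well-defined input-to-output mapping, with all the interesting work going into how the ratings are learned and updated.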