Monday, May 11, 2009

Mount Vernon...


I was chatting with George Washington today at www.chatwithwashington.com. He is one of the Virsona characters that we have built.

I got to thinking that as we build our company, George probably has a lot of lessons he could teach us about what it's like to do a start-up... although in his case it was obviously a country and not just a business.

I did a little digging and found that someone already wrote the book on this:
George Washington and the Art of Business: The Leadership Principles of America's First Commander-in-Chief by Mark McNeilly

The Barnes and Noble review reads: "McNeilly uses these stirring military encounters to underscore Washington's managerial genius: to persuade and inspire, to open up the decision-making process, to seize opportunities when they arise, to persevere when setbacks occurred, and to learn from his mistakes. Indeed, the true value of the book lies in McNeilly's brilliant ability to link military and business strategy, revealing that successful corporate leaders must possess many of the same traits that Washington did."

Good stuff. I am now going off to chat some more with George to see if he can pass along some more of that great inspiration...

Friday, May 1, 2009

Quantum Leap...?


IBM is making a machine that can compete on Jeopardy. Hmmm. What's next, Deep Wheel of Fortune Blue? Who Wants to Be a Millionaire Blue? Deal or No Deal Blue?

Last night I went to see The Complete Works of William Shakespeare (Abridged). Very funny. One of the lines in the play is something about a whole bunch of monkeys actually writing the play. This gave me the idea for how to build my own QA engine that can compete on Jeopardy.

It's based on quantum computing and I will call it our Quantum Engine Dialog Machine (QEDm). The QEDm holds a wave function for each piece of input. That is to say, until we actually extract a response, it holds all the possible responses to a piece of input. Only by extracting the response do we collapse the wave function and get the actual response we need. So at any point in time we already have the perfect answer; we just have to extract it. Simple.
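For anyone who wants to see just how simple (and how entirely classical) this really is, here is a toy sketch of the idea in Python. The class and method names are purely illustrative, not actual Virsona code, and the weights are just made-up numbers standing in for amplitudes:

import random

# A purely classical (and tongue-in-cheek) sketch of the QEDm idea: every
# input "holds" all of its candidate responses at once, and we only
# "collapse" to a single answer when a response is extracted.

class QEDm:
    def __init__(self):
        # Each input maps to its "superposition": candidate responses with weights.
        self.superpositions = {}

    def prepare(self, user_input, weighted_responses):
        # Store every possible response for a given input, with a weight for each.
        self.superpositions[user_input] = weighted_responses

    def collapse(self, user_input):
        # Extracting a response "collapses the wave function" to a single answer.
        candidates = self.superpositions.get(user_input)
        if not candidates:
            return "I have no idea. The wave function was empty."
        responses, weights = zip(*candidates.items())
        return random.choices(responses, weights=weights, k=1)[0]

engine = QEDm()
engine.prepare("He created the most absurd machine ever.",
               {"Who is that guy from Virsona?": 0.9,
                "What is Deep Blue?": 0.1})
print(engine.collapse("He created the most absurd machine ever."))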

Answer: He created the most absurd machine ever.

Question: Who is that guy from Virsona?

Wednesday, April 15, 2009


I was reading an interesting book by the physicist Michio Kaku called Physics of the Impossible. It has a thought-provoking chapter on Artificial Intelligence and Robots.

In it he discusses the difference between syntax (grammar, structure), which is a logical, rules-driven exercise, and semantics (meaning). He talks about how various influential scientists and futurists such as Roger Penrose believe that true AI is impossible.

He also discusses the whole area of consciousness and its applicability to AI. Interestingly enough, he refers to himself as a constructionist, i.e. instead of debating whether true AI is possible, just get on and try to build one. I like that kind of thinking.

We are definitely not trying to build a true AI at Virsona... only one that appears to have intelligence. Kaku uses the example of a thermostat. It knows when the room is cold and can turn on the heater. Is that intelligence? Could the thermostat be said to be an AI? No, clearly not; however, it does imply that there is a spectrum of AI, ranging from the thermostat all the way up to a true thinking machine.
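Just to make the point concrete, here is the entire "intelligence" of that thermostat written out as code. This is obviously not our engine, just Kaku's example made explicit:

# The entire "intelligence" of Kaku's thermostat, as code: one hard-coded rule,
# sitting at the very bottom of the AI spectrum.

def thermostat(room_temperature_c, set_point_c=20.0):
    # Return True if the heater should be turned on.
    return room_temperature_c < set_point_c

print(thermostat(17.5))  # True: the room is cold, so turn the heater on
print(thermostat(22.0))  # False: warm enough, leave the heater off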

As we continue to improve both the syntax-handling capabilities and the semantic capabilities of our technology, we will move closer to emulating an AI, further up this spectrum if you will. Right now, however, it's still just logic, even if it's really smart logic.

The leap to a true AI is something we will have to put in next year's development plan, I think... or maybe the year after that.

Thursday, April 9, 2009

Silicon Roses....


I was chatting with Einstein today at our chat site for him, and he has this great line about "the only reason for time is so that everything doesn't happen at once".

Working in a tech startup, everything does actually happen at once, which is probably why there are never enough hours in the day.

As we continue to build new capabilities into the Virsona engine, we have been focusing on system resource allocation, processing, and effective ways to enhance the conversation. One of the interesting subjects that came up is a standard issue that chess computers faced in the early days. A brute-force approach ("I can do more calculations than you, in less time") was the initial way to win.

While it is true that doing more calculations faster can get you to a destination more quickly, it doesn't necessarily mean that the answer is significantly better than one that requires less processing but takes more time to decide on the right paths to take.
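A back-of-the-envelope example makes the trade-off clearer. The numbers below are rough assumptions (a branching factor of around 35 legal moves per chess position), not measurements, but they show how a selective search can look much deeper than a brute-force one while examining fewer positions:

# Rough numbers only: a chess position has on the order of 35 legal moves.
BRANCHING_FACTOR = 35

def full_width_nodes(depth):
    # Positions examined if we brute-force every move to a fixed depth.
    return BRANCHING_FACTOR ** depth

def selective_nodes(depth, top_k=5):
    # Positions examined if we only follow the top_k most promising moves.
    return top_k ** depth

print(full_width_nodes(4))   # 1,500,625 positions for a 4-ply brute-force search
print(selective_nodes(8))    # 390,625 positions for an 8-ply selective search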

Is there something inherent in the amount of time we take to process thought that processors cannot emulate? Perhaps the fact that we think in a specific way, constrained by the chemical reactions and pathways that drive thought, is in fact the key to our creativity, and therefore throwing power and speed at the problem will never allow us to emulate it.

Perhaps the answer is to make our AI engines stop and smell the silicon roses?

Wednesday, April 8, 2009

It's a butterfly....?


I have spent a lot of time thinking about personality recently. My head is swimming with everything from the Five Factor Model to Enneagrams to Social Cognitive Theories.

Turns out that personality is not an easy thing to quantify, or even to qualify exactly what it means. Psychologists abound with differing theories of the why and approaches to the how.

As we look at adding personality to our AI technology, we have had to pick through much of this work and look at what is not just useful but also practical from an implementation perspective. Personality models that use language as a basis make a lot of sense, as language is somewhat more easily controllable: choosing a certain word over another is an indication of personality type, and while we all do it subconsciously, people 'assign' personality by listening to the words we use.
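As a rough illustration of what "word choice signals personality" can mean in practice, here is a minimal sketch using a tiny made-up lexicon. Real models built around the Five Factor Model use far larger, validated word lists; the traits and words below are just placeholders:

# A made-up mini lexicon: real models use large, validated word lists.
PERSONALITY_LEXICON = {
    "extraversion": {"party", "friends", "awesome", "fun"},
    "conscientiousness": {"plan", "schedule", "finish", "careful"},
    "openness": {"imagine", "curious", "art", "novel"},
}

def score_personality(text):
    # Count lexicon hits per trait as a crude signal of personality from word choice.
    words = text.lower().split()
    return {trait: sum(1 for word in words if word in vocabulary)
            for trait, vocabulary in PERSONALITY_LEXICON.items()}

print(score_personality("We should plan the schedule and finish the art project"))
# {'extraversion': 0, 'conscientiousness': 3, 'openness': 1}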

In the movies and books, AIs always seem to have a strong, fixed personality type... usually quite sarcastic, it seems. From Marvin the Paranoid Android in The Hitchhiker's Guide to the Galaxy to Robby the Robot, or even to my good friend HAL. However, the reality is that as we start to see more 'personality' come into our hardware and software constructs, these will not be fixed personalities but rather will blend to match the needs of the user.

At least they will if I can just figure out what that ink blot actually is...

Friday, March 13, 2009

By the Numbers....


Last night I got a chance to work through the TiVo to see if there was anything good that it had decided to record for me. Lo and behold, there was an episode of a US drama series called Numb3rs. It's an 'FBI catches bad guys' show with an interesting premise: the lead FBI guy has a genius mathematician brother who uses maths to catch all the bad guys.

Anyway, in this episode a super-smart guy builds a 'sentient' AI that goes berserk and kills him. The maths guy at first claims it has passed the Turing test and they all get very excited, only to find out that it hasn't really... it's just the bad guys making it look like the AI did it. The AI was just a sorry, good-for-nothing, dumb Natural Language Processing engine.

It was a great episode in that they really pushed out all the cool AI clichés.

After watching it, I thought I'd make my top 10 list of things that are clear signs that your AI development program is in trouble:

10. You give your AI a cute girl's name (Brooke, Bailey, Ashley, etc...)
9. You ask your AI to do something and it says 'I don't think so'.
8. Your company installs klaxons and red flashing lights in the room where you do your AI development.
7. The AI runs on a machine that has no visible means of disconnecting it from either the web or the electric grid.
6. You are testing out your AI and it starts to pull facts about you directly from your FBI file and your 2nd grade school report.
5. Your AI is making more sense than your developers.
4. Your AI requests to watch "2001: A Space Odyssey" 12 times in a row.
3. Your AI informs you that pattern matching approaches were so pre-singularity.
2. Your AI develops a taste for country music.
1. The new boss of your company develops an evil English accent.

Sunday, March 8, 2009

Twitter User loves his (or her) Mother...


I came across this article on CNN.com today.

Gender Neutral Pronoun

The article discusses why the English language doesn't have a gender-neutral pronoun. Apparently users of Twitter are concerned. I assume this is because it takes up valuable characters to write 'he or she' in a tweet... oh, the humanity of it.

The article uses the following example:

Consider the sentence "Everyone loves his mother." The word "his" may be seen as both sexist and inaccurate, but replacing it with "his or her" seems cumbersome, and "their" is grammatically incorrect.


The idea that English on the net is different from English in its true spoken or written form is something I have talked about before. There is probably no reason why a gender-neutral pronoun can't be introduced in netspeak and adopted a whole lot quicker than in regular English. For example, we all understand the term 'lol', but few of us would actually use it when we are talking to someone in real life. In fact, if you did use it, people would probably just start to avoid you.

As we build our Virsona Dialog Engine technology, we have to be able to handle both 'proper' English and netspeak English. They are significantly different forms of communication. In fact, we have learnt (learned) some pretty interesting things in terms of how the engine needs to adapt to online communication vs. regular English communication. We use the term Natural Language Processing, but perhaps what we are really starting to develop is Netural Language Processing.
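As a small, hypothetical example of what "adapting to netspeak" can look like at the simplest level, here is a sketch that normalises a few common net abbreviations before any other processing. The mapping is an illustrative sample, not our actual list:

# A small illustrative sample of netspeak mappings, not an actual Virsona list.
NETSPEAK = {
    "lol": "laughing out loud",
    "brb": "be right back",
    "imo": "in my opinion",
    "u": "you",
    "gr8": "great",
}

def normalise_netspeak(utterance):
    # Replace known netspeak tokens with their plain-English equivalents.
    return " ".join(NETSPEAK.get(token.lower(), token)
                    for token in utterance.split())

print(normalise_netspeak("lol u will think it is gr8"))
# -> "laughing out loud you will think it is great"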

So if the twitterati want to come up with a suitable replacement for he/she, we will be ready for it.