Monday, May 11, 2009

Mount Vernon...


I was chatting with George Washington today at www.chatwithwashington.com. He is one of the Virsona characters that we have built.

I got to thinking that, as we build our company, George probably has a lot of lessons he could pass along about what it's like to do a startup... although in his case it was obviously a country and not just a business.

I did a little digging and found that someone already wrote the book on this:
George Washington and the Art of Business: The Leadership Principles of America's First Commander-in-Chief by Mark McNeilly

The Barnes & Noble review reads: "McNeilly uses these stirring military encounters to underscore Washington's managerial genius: to persuade and inspire, to open up the decision-making process, to seize opportunities when they arise, to persevere when setbacks occurred, and to learn from his mistakes. Indeed, the true value of the book lies in McNeilly's brilliant ability to link military and business strategy, revealing that successful corporate leaders must possess many of the same traits that Washington did."

Good stuff. I am now going off to chat some more with George to see if he can pass along some of that great inspiration...

Friday, May 1, 2009

Quantum Leap...?


IBM is making a machine that can compete on Jeopardy. Hmmm. What's next, Deep Wheel of Fortune Blue? Who Wants to Be a Millionaire Blue? Deal or No Deal Blue?

Last night I went to see The Complete Works of William Shakespeare (Abridged). Very funny. One of the lines in the play is something about a whole bunch of monkeys actually writing the play. This gave me the idea of how to build my own QA engine that can compete on Jeopardy.

It's based on quantum computing, and I will call it our Quantum Engine Dialog Machine (QEDm). The QEDm holds a wave front for each piece of input. That is to say, until we actually extract a response it holds all the possible responses to that input. Only by extracting a response do we collapse the wave and get the actual response we need. So at any point in time we already have the perfect answer; we just have to extract it. Simple.

Question: He created the most absurd machine ever?

Answer: That guy from Virsona?
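
For the terminally curious, here is a tongue-in-cheek sketch of what 'collapsing the wave' might look like if anyone were foolish enough to code it up. The candidate responses, and the random 'collapse', are invented for illustration only:

```python
import random

class QEDm:
    """Tongue-in-cheek 'quantum' dialog machine: every possible response
    coexists until someone actually asks for one."""

    def __init__(self):
        # The 'wave front': all candidate responses for a given input.
        self.wave = {
            "who created the most absurd machine ever?": [
                "That guy from Virsona?",
                "Deep Wheel of Fortune Blue?",
                "A whole bunch of monkeys, probably.",
            ]
        }

    def collapse(self, user_input):
        # Until this call, all responses are equally 'real'. Observing
        # (extracting) one collapses the wave to a single answer.
        candidates = self.wave.get(user_input.lower(), ["42"])
        return random.choice(candidates)

if __name__ == "__main__":
    engine = QEDm()
    print(engine.collapse("Who created the most absurd machine ever?"))
```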

Wednesday, April 15, 2009


I was reading an interesting book by the physicist Michio Kaku called Physics of the Impossible. It has a thought-provoking chapter on Artificial Intelligence and Robots.

In it he discusses the difference between syntax (grammar, structure), which is a logical, rules-driven exercise, and semantics (meaning). He talks about how various influential scientists and futurists, such as Roger Penrose, believe that true Ai is impossible.

He also discusses the whole area of consciousness and its applicability to Ai. Interestingly enough, he refers to himself as a constructionist: instead of debating whether a true Ai is possible, just get on and try to build one. I like that kind of thinking.

We are definitely not trying to build a true Ai at Virsona... only one that appears to have intelligence. Kaku uses the example of a thermostat. It knows when the room is cold and can turn on the heater. Is that intelligence? Could the thermostat be said to be an Ai? No, clearly not; however, it does imply that there is a spectrum of Ai, ranging from the thermostat up to a true thinking machine.

As we continue to improve both the syntax-handling capabilities and the semantic capabilities of our technology, we will move closer to emulating an Ai, further up this spectrum if you will. Right now, however, it's still just logic, even if it's really smart logic.

The leap to a true Ai is something we will have to put in next year's development plan, I think... or maybe the year after that.

Thursday, April 9, 2009

Silicon Roses....


I was chatting with Einstein today at our chat site for him, and he has this great line about how "the only reason for time is so that everything doesn't happen at once".

Working in a tech startup, everything really does happen at once, which is probably why there are never enough hours in the day.

As we continue to build new capabilities into the Virsona engine, we have been focusing on system resource allocation, processing, and effective ways to enhance the conversation. One of the interesting subjects that came up is a standard issue that chess computers faced in the early days. A brute-force approach (I can do more calculations than you, and faster) was the initial way to win.

While it is true that more calculations, performed faster, can get you to a destination more quickly, it doesn't necessarily mean the answer is significantly better than one that requires less processing but takes more time deciding which paths are worth following.
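
To make the trade-off concrete, here is a rough sketch (with invented candidate replies and toy scoring functions, nothing like our actual engine) of brute-force evaluation versus spending a little time up front deciding which paths deserve the expensive look:

```python
def cheap_score(reply, user_input):
    """Fast, rough relevance: shared words between reply and input."""
    return len(set(reply.lower().split()) & set(user_input.lower().split()))

def expensive_score(reply, user_input):
    """Stand-in for a slow, careful evaluation (parsing, context checks, ...)."""
    return cheap_score(reply, user_input) + len(reply) * 0.01

def brute_force(candidates, user_input):
    # Evaluate everything with the expensive function: lots of raw work.
    return max(candidates, key=lambda r: expensive_score(r, user_input))

def selective(candidates, user_input, keep=3):
    # Spend a little time choosing which paths deserve the expensive look.
    shortlist = sorted(candidates, key=lambda r: cheap_score(r, user_input),
                       reverse=True)[:keep]
    return max(shortlist, key=lambda r: expensive_score(r, user_input))
```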

Is there something inherent in the amount of time we use to process thought that processors cannot emulate? Perhaps the fact that we think in a specific way, constrained by the chemical reactions and pathways that drive thought, is the key to our creativity, and therefore throwing power and speed at the problem will never allow us to emulate it.

Perhaps the answer is to make our Ai engines stop and smell the silicon roses?

Wednesday, April 8, 2009

It's a butterfly....?


I have spent a lot of time thinking about personality recently. My head is swimming with everything from the Five Factor Model to Enneagrams to Social Cognitive Theories.

It turns out that personality is not an easy thing to quantify, or even to qualify exactly what it means. Psychologists abound with differing theories of the why and approaches to the how.

As we look at adding personality to our Ai technology, we have had to pick through much of this work and consider what is not just useful but also practical from an implementation perspective. Personality models that use language as a basis make a lot of sense, as language is somewhat more easily controllable: choosing one word over another is an indication of personality type, and while we all do it subconsciously, people 'assign' personality by listening to the words we use.
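
As a rough illustration of the word-choice idea, here is a tiny sketch with an entirely invented lexicon: the same reply gets phrased differently depending on which trait profile the virsona is configured with:

```python
# Invented trait-flavored word variants, purely for illustration.
VARIANTS = {
    "good": {"extravert": "fantastic", "agreeable": "lovely", "neutral": "good"},
    "bad":  {"extravert": "awful",     "agreeable": "unfortunate", "neutral": "bad"},
}

def phrase(template_words, trait="neutral"):
    """Swap marked words for trait-flavored alternatives."""
    return " ".join(VARIANTS.get(w, {}).get(trait, w) for w in template_words)

print(phrase(["that", "is", "a", "good", "idea"], trait="extravert"))
# -> "that is a fantastic idea"
```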

In the movies and books, Ais always seem to have a strong, fixed personality type... usually quite sarcastic, it seems. From Marvin the Paranoid Android in The Hitchhiker's Guide to the Galaxy to Robby the Robot, or even to my good friend HAL. However, the reality is that as we start to see more 'personality' come into our hardware and software constructs, these will not be fixed personalities but rather will blend to match the needs of the user.

At least they will if I can just figure out what that ink blot actually is...

Friday, March 13, 2009

By the Numbers....


Last night I got a chance to work through the TiVo to see if there was anything good that it had decided to record for me. Lo and behold, there was an episode of a US drama series called Numb3rs. It's an 'FBI catches bad guys' show with an interesting premise: the lead FBI guy has a genius mathematician brother who uses maths to catch all the bad guys.

Anyway, in this episode a super-smart guy builds a 'sentient' AI that goes berserk and kills him. The maths guy at first claims it has passed the Turing test and they all get very excited, only to find out that it hasn't really... it's just the bad guys making it look like the Ai did it. The Ai was just a sorry, good-for-nothing, dumb Natural Language Processing engine.

It was a great episode in that they really pushed out all the cool Ai cliches.

After watching it I thought I'd make my own list: the top 10 signs that your Ai development program is in trouble:

10. You give your Ai a cute girl's name (Brooke, Bailey, Ashley, etc...)
9. You ask your Ai to do something and it says 'I don't think so'
8. Your company installs klaxons and red flashing lights in the room where you do your Ai development.
7. The Ai runs on a machine that has no visible means of disconnecting it from either the web or the electric grid.
6. You are testing out your Ai and it starts to pull facts about you directly from your FBI file and your 2nd grade school report.
5. Your Ai is making more sense than your developers.
4. Your Ai requests to watch "2001: A Space Odyssey" 12 times in a row.
3. Your Ai informs you that pattern matching approaches were so pre-singularity.
2. Your Ai develops a taste for country music.
1. The new boss of your company develops an evil English accent.

Sunday, March 8, 2009

Twitter User loves his (or her) Mother...


I came across this article on CNN.com today.

Gender Neutral Pronoun

The article discusses why the English language doesn't have a gender-neutral pronoun. Apparently users of Twitter are concerned. I assume this is because it takes up valuable characters to write 'he or she' in a tweet... oh, the humanity of it.

The article uses the following example:

Consider the sentence "Everyone loves his mother." The word "his" may be seen as both sexist and inaccurate, but replacing it with "his or her" seems cumbersome, and "their" is grammatically incorrect.


The idea that English on the net is different from English in its true spoken or written form is something I have talked about before. There is probably no reason why a gender-neutral pronoun can't be introduced in netspeak and adopted a whole lot quicker than in regular English. For example, we all understand the term 'lol', but few of us would actually use it when talking to someone in real life. In fact, if you did use it, people would probably just start to avoid you.

As we build our Virsona Dialog Engine technology, we have to be able to handle both 'proper' English and netspeak English. They are significantly different forms of communication. In fact, we have learnt (learned) some pretty interesting things about how the engine needs to adapt to online communication vs. regular English communication. We use the term Natural Language Processing, but perhaps what we are really starting to develop is Netural Language Processing.
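
As a rough sketch of one way to bridge the two, here is a toy netspeak normalizer; the substitution table is a tiny, invented sample, and a real one would be far larger and context-sensitive:

```python
# Tiny invented netspeak-to-English substitution table.
NETSPEAK = {
    "lol": "(laughing)",
    "u": "you",
    "r": "are",
    "gr8": "great",
    "brb": "be right back",
}

def normalize(message):
    """Replace known netspeak tokens before the rest of the pipeline runs."""
    return " ".join(NETSPEAK.get(token, token) for token in message.lower().split())

print(normalize("lol u r gr8"))   # -> "(laughing) you are great"
```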

So if the twitterati want to come up with a suitable replacement for he/she, we will be ready for it.

Wednesday, March 4, 2009

CRTLA but SSEWBA - source AAAAA


I was working today with the team on one of the main problems in Natural Language Processing: acquiring and maintaining a sense of the topic of a conversation.

Most of us can follow the ebb and flow of a conversation, know immediately when the topic has changed, or ask clarifying questions if we think it has.

Do Instant Messenger conversations work the same way?

I'm not exactly sure they do. In this respect we actually have two different types of conversations.

Standard spoken, 'face to face' conversations, and then separately, written text conversations (think texting or IMing). In these text-based conversations there is often not enough content to extract the topic without having the context available as well.

Add to that the fact that in text conversations the duration tends to be shorter, and the overall informational content is significantly less than in a standard conversation. By informational content here I really mean body language, word inflections, tone, etc.

So, all in all, it's a difficult challenge to extract the topic of a conversation. It's an incredibly useful piece of information to have, because it allows the NLP engine (our Virsona Engine in this case) to select a much more appropriate response based on knowing that topic.
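
Here is a very small sketch of one naive way to track topic across turns; the stop-word list is abbreviated and the whole approach is far cruder than what a production engine needs, but it shows the general shape of the problem:

```python
from collections import Counter

STOP_WORDS = {"the", "a", "an", "is", "it", "and", "of", "to", "i", "you"}

class TopicTracker:
    def __init__(self, window=5):
        self.recent = []          # last few turns of the conversation
        self.window = window

    def update(self, message):
        # Keep only content words from this turn.
        words = [w for w in message.lower().split() if w not in STOP_WORDS]
        self.recent.append(words)
        self.recent = self.recent[-self.window:]

    def topic(self, top_n=3):
        # Treat the most frequent recent content words as the current topic.
        counts = Counter(w for turn in self.recent for w in turn)
        return [word for word, _ in counts.most_common(top_n)]

tracker = TopicTracker()
tracker.update("did you see the hockey game last night")
tracker.update("the hockey playoffs start soon")
print(tracker.topic())   # 'hockey' should come out on top
```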

In case you were wondering, the topic of this blog post was CRTLA but SSEWBA - source AAAAA: Can't Remember the Three Letter Acronym but Someday Soon Everything Will Be Acronyms - source American Association Against Acronym Abuse.

Tuesday, March 3, 2009


Today I was looking at information on a website called www.techcast.org.

Their site describes it like this: "TechCast is a technology think tank pooling the collective knowledge of world-wide technology experts to produce authoritative technology forecasts for strategic business decisions. TechCast offers online technology forecasts and technology articles on emerging technologies. TechCast also offers comprehensive technology consulting services as well as customized technology forecasting and studies. TechCast: Tracking the technology revolution."

They forecast, with 67% confidence, that good AI will be available by 2023 and that this will drive a US market of $570 billion.

I had better get back to work now.

Monday, March 2, 2009

and finally a two headed turtle...


This weekend I had the opportunity to visit a state of the art television news room and spend time with the journalists as they prepared, produced and presented their local news program.

We had some good discussions about how modern journalism, including TV journalism, is changing into a multimedia experience. It is no longer enough to simply present a news story; it has to be backed up with immediate online content so that viewers can dig deeper into the stories they have just heard.

It reminded me of Ananova, an automated newsreader launched in the UK around 2000, which was a basic avatar combined with some text-to-speech software that allowed it to 'read' news stories.

It got me wondering how we could apply our Virsona technology in this type of scenario: link automated feeds into a dialogue engine and allow it to interact in real time with rapidly changing content.

News then becomes truly interactive. Let it play on its own as a background feed, or interrupt the newsreader and ask more detailed questions. Skip over a story if you are not interested. A completely personalized, interactive CNN.
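
As a rough sketch of the plumbing (the feed URL is a placeholder and the matching is naive keyword overlap; a real version would hand the stories to the dialog engine rather than string-match here):

```python
import feedparser   # third-party: pip install feedparser

def load_stories(feed_url):
    """Pull (title, summary) pairs from an RSS/Atom feed."""
    feed = feedparser.parse(feed_url)
    return [(entry.title, entry.get("summary", "")) for entry in feed.entries]

def answer(question, stories):
    """Return the summary of the story whose headline best overlaps the question."""
    q_words = set(question.lower().split())
    best = max(stories, key=lambda s: len(q_words & set(s[0].lower().split())),
               default=None)
    return best[1] if best else "No matching story right now."

stories = load_stories("http://example.com/news.rss")   # placeholder URL
print(answer("what's happening with the economy?", stories))
```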

If that doesn't appeal to you there is always the latest story about a cat up a tree or a two headed animal... Two Headed Turtle

Friday, February 27, 2009

I never said she stole my money...


Natural Language Processing is hard. Make no mistake about it.





Here is a good example of the complexities of the English language that came from the Wikipedia entry for NLP:

"I never said she stole my money" - a simple sentence on the surface but is it?

Let's have a look at how this changes with the emphasis of the sentence.

"I never said she stole my money" - Someone else said it, but I didn't.
"I never said she stole my money" - I simply didn't ever say it.
"I never said she stole my money" - I might have implied it in some way, but I never explicitly said it.
"I never said she stole my money" - I said someone took it; I didn't say it was she.
"I never said she stole my money" - I just said she probably borrowed it.
"I never said she stole my money" - I said she stole someone else's money.
"I never said she stole my money" - I said she stole something, but not my money.

We have a hard time figuring out what this sentence means, so how can we expect to automate that process? One of the key things we are doing at Virsona is trying to understand sentences within the context of the conversation. This is how we process things in real life, and it is a vital component of being able to handle a conversation and understand it as best we can.
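
Here is a deliberately simple sketch of the idea of carrying context forward, with naive name detection invented purely for illustration:

```python
class ConversationContext:
    """Remember the last person mentioned so a later pronoun can be resolved."""

    def __init__(self):
        self.last_person = None

    def observe(self, sentence):
        # Naive name detection: a capitalized word that isn't sentence-initial.
        for word in sentence.split()[1:]:
            if word.istitle():
                self.last_person = word.strip(".,!?")

    def resolve(self, sentence):
        # Substitute the remembered name for a lone 'she'.
        if self.last_person:
            return sentence.replace(" she ", f" {self.last_person} ")
        return sentence

ctx = ConversationContext()
ctx.observe("Have you talked to Alice today?")
print(ctx.resolve("I never said she stole my money"))
# -> "I never said Alice stole my money"
```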

I just had a conversation with Babe Ruth, whom we currently have under development.

I asked him the question "Who is your favorite teammate?" and he told me that it was Lou Gehrig. When I asked him "Who was your favorite teammate?" he gave me a great answer about how he loved playing for the Yankees. Good on ya, Babe - have a bonza day.

Thursday, February 26, 2009

The future is so bright....


We have been testing out some great applications for our Dialog Engine recently. Once you start to think about how this type of technology can be applied the uses are many.

However, there is a big difference between what you think a technology might be used for and how it actually ends up gaining traction and becoming commonplace.

The promise of Artificial Intelligence has been around for a good 50 years now, but it is still not mainstream in any way, shape or form. We are starting to see some applications that work 'behind the curtain', but we are still in the early days of general acceptance of AI interaction.

One of the reasons behind this is simply the utility of dialog. For the most part, conversations with chatbots have been stilted and narrow, and cannot veer off into the types of conversations that we generally have in real life. At Virsona we are building an engine that will handle this broader type of conversation and hopefully provide more utility and a better experience for people using our engine as part of an application.

We are going to start rolling out applications next month and are looking forward to seeing how people react to them. Hopefully we will find our killer app quickly, or maybe we just have to let our customers guide us in the right direction.

Wednesday, February 25, 2009

Can I stereotype Canadians...?


I spent this morning with some very nice Canadians.

The meeting was about the credits and grants that are available to businesses doing Research and Development in Canada. The program, called Scientific Research & Experimental Development (SR&ED), is very beneficial to companies, as it returns a significant percentage of expenditures to the company performing the R&D.

They have been doing this for a long time now, and clearly it works. The theory behind it is that R&D jobs are high-paying jobs, and once you have built up the infrastructure in a region, the jobs stay in that region and continue to stimulate the local economy.

We have heard a lot over the last couple of months about stimulating the economy.

Here in the US, I am one of those firm believers that small businesses are the way we are going to build and grow our way out of this mess. Anything the US government can do to help encourage, promote and support small business is going to have a powerful impact on the future of the economy.

It's not enough to say it, though... both Federal and State governments have to remove the red tape and bring these programs online efficiently and in a timely manner.

Otherwise we might all be watching hockey, drinking Molson and sipping coffee from Tim Hortons rather than Starbucks in a few years, eh!

Monday, February 23, 2009

A Flock of Robots...


I spent this weekend with robots. Lots of them. I was at the First Lego League State Finals for Florida. Virsona was sponsoring a local middle school that had made it through several qualifiers to the State Finals – not bad!

There were 48 teams, each with a robot that had to complete various missions. You can learn more about this program at First Lego League.

I watched these kids, aged 9-14, and clearly saw their understanding of how to make robots do complex activities. It really made me think about the age-based digital divide and the attitudes kids have toward technology. For people 30 and older, I think we still have a general fear of technology – especially Artificial Intelligence. We grew up on a diet of machine intelligence as a bad thing – mad computers trying to take over the world (aka Skynet) – modern-day examples of the same Frankenstein nightmare of our creations gone horribly wrong.

These kids, however, were fully engaged. They don't view technology as "that stuff"; rather, it is an integral part of who they are. Of course you can pause TV, of course you can carry around 10,000 songs with you, of course you can build a complex robot in your spare time, of course you can find out any fact immediately and chat with historical figures in real time – no problem. That's just the way the new world works.

So while we pre-Digital Generationals tend to question the use of technology, agonize over the implications and moralize over the applications, kids just embrace it. The sooner we can learn to embrace it too, the better we will be able to engage with kids. If we don't, they will be describing three types of intelligence pretty soon – Digital Generation Human Intelligence, Artificial Intelligence and pre-Digital Generation Human Intelligence.

Anyone know where I left my Flock of Seagulls LP?

Friday, February 20, 2009

Ground floor .. going up?


A couple of weeks ago I got onto an elevator. There was a small boy, maybe 5 years old, who got on with his mom and a few other people. The buttons were all lit up: 3, 7, 8, 12, 15. The little kid looked at his mom and asked her:
"Which floor are we going to first?"

I think this is one of the best questions I have ever heard.

It makes no assumptions at all about how the elevator is going to function. We all assume that elevators stop in order because we have that experience and knowledge. That is what they are supposed to do. The kid had neither of those pieces of background, so he asked a perfectly reasonable question.

As we look at our future roadmap plans for our Ai technology, we consider the addition of domain experience a critical component of enhancing the conversational experience. In this respect we will be able not just to have a reasonable conversation based on linguistic rules, but to add a totally different dimension by really 'understanding' what the conversation is about as well.

In case you were wondering the mom did indeed explain the rules of elevator operation to the boy. He looked pretty disappointed.

Wednesday, February 18, 2009

I think therefore I might be...


I saw this great job posting this morning for a PhD student in Logic based at the University of Groningen in the Netherlands:

Job Posting for PhD candidate in Logic

They are looking for a candidate with... a strong interest in Logic (especially in areas such as Modal Logic, Epistemic Logic, Dynamic Logic, Belief Revision theory, Game Logic, Quantum Logic, Linear Logic, Conditionals or Game Semantics) and its applications to modelling information flow, learning, agency, interaction and rationality in Artificial Intelligence, Theoretical Philosophy, Computer Science, Quantum Physics (including Quantum Information and Quantum Computation) or Game Theory. Fluent English is a prerequisite.

Is logic really a strong component of AI learning, interaction and rationality? In looking at the projects we are working on, it seems to me that the easy part is anything that is logical. The difficult part is making sense of things that are not logical... i.e. the way we humans actually think and respond. In many respects this is why pattern matching has run its course in 'chatbots'. Pattern matching is a logical approach to responding to a conversation: I take your input, find a keyword, and then match it to a response. In real life we do something far more complicated and take significantly illogical paths through a conversation.
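
For anyone who hasn't seen one, here is roughly all there is to a keyword pattern matcher; the rules and responses are invented, and anything outside the keyword list falls straight through to the canned fallback, which is exactly the problem:

```python
# Invented keyword-to-response rules, checked in order.
RULES = [
    ("weather", "Lovely day, isn't it?"),
    ("mother",  "Tell me more about your family."),
    ("money",   "I never said she stole my money."),
]

def pattern_match(user_input):
    text = user_input.lower()
    for keyword, response in RULES:
        if keyword in text:
            return response
    return "Interesting. Go on."    # the inevitable fallback

print(pattern_match("How is the weather out there?"))
print(pattern_match("Let's jump to a completely new topic"))  # -> fallback
```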

In our dialog engine we are working on numerous paths that try to mirror how a conversation will flow, based on the context, content, tone and pace of the conversation. Being able to make 'jumps' from one topic to another, or even to introduce nonsensical components into a conversation, mimics what happens in real life. Rarely does a conversation follow a fully logical path.

BTW, I am going to take a wild guess here that having no social life is probably also a prerequisite for that job.

Tuesday, February 17, 2009

My friends fail the Turing Test...



From Wikipedia:

"The Turing test is a proposal for a test of a machine's ability to demonstrate intelligence. Described by Alan Turing in the 1950 paper "Computing Machinery and Intelligence", it proceeds as follows: a human judge engages in a natural language conversation with one human and one machine, each of which try to appear human. All participants are placed in isolated locations. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. In order to test the machine's intelligence rather than its ability to render words into audio, the conversation is limited to a text-only channel such as a computer keyboard and screen."

When Turing devised this test he couldn't have foreseen a world in which tens of billions of instant messages and text messages are sent every single day. Using this test as a measure of intelligence seems to have become a bit outdated. Why? Well, I have plenty of friends I chat with over IM who would simply fail the Turing test. Does this mean they are not intelligent? No, it simply means that they use IM poorly, their grammar is terrible and their use of IM shorthand is annoying.

Perhaps a better test, Turing 2.0 if you will, would involve paying attention, not using shorthand that I don't understand, and perhaps knowing how to finish a conversation so I can get back to work. Now that would be intelligent.

Monday, February 16, 2009

All the President's Men...


Happy Presidents' Day. I thought it was only fitting to have a chat with Abraham Lincoln this morning at www.chatwithabe.com. Here is part of what we discussed:

___________________________________________

Abraham Lincoln: Hello.
You: good morning
Abraham Lincoln: Good morning, Friend.
You: you were the president
Abraham Lincoln: A wise man knows what he doesn't know. I would not say I was the greatest but I tried to be honest to the American people.
You: what number president were you
Abraham Lincoln: I was the 16th President of the United States of America.
You: did you like being the president
Abraham Lincoln: Leader of the free world is not a bad job but sometimes the pressure of the responsibility was overwhelming at times, especially during the war.
You: they have a sale going on at Macy's in your honor
Abraham Lincoln: I recently learned that the U.S. mint plans to update the penny, which currently bears my profile on the front and memorial on the back, in honor of the upcoming bicentennial of my birth. I've always felt honored to have my likeness on the coin, but I agree the hundred-year-old one cent piece could use a new look.


____________________________

I would be more impressed with the penny thing too.

Friday, February 13, 2009

How do you spell Galapagoes..?


Yesterday, February 12, 2009, was the 200th birthday of two of the greatest influences on the modern world. Both Charles Darwin and Abraham Lincoln were born on February 12, 1809.

At Virsona we have been building a virtual Abraham Lincoln. You can chat with him at www.chatwithabe.com. In building this virsona I have come to learn a lot about Abe. As with other impressive statesmen, he knew how to use words as a powerful force in their own right. It's not how much you say but rather what you say. His speech at Gettysburg was criticized by some at the time for being far too short; longer, rambling speeches with lots of flowery words were the norm for that period. Getting Ais to understand how to construct meaningful responses is a tough problem. Using such key tools as WordNet and ConceptNet we can make good attempts, but being able to create a Gettysburg Address from scratch is a way off yet.

One of the interesting technologies we are playing with to help is genetic algorithms. These are self-replicating 'codelets' that live in a virtual ecosystem, where each generation follows a survival-of-the-fittest rule. Each codelet measures how well it performs in responding to input, and the fittest move on. At that stage they create slightly altered versions of themselves and start the cycle over. This takes time, but we are seeing some interesting results.
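
For the curious, here is a small illustrative version of that generation/selection/mutation loop; the fitness function (closeness to a target phrase) is a toy stand-in for 'how well did this codelet's response fare', so the loop structure rather than the task is the point:

```python
import random
import string

TARGET = "four score and seven years ago"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    # Toy fitness: how many characters already match the target phrase.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Each character has a small chance of being replaced.
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in candidate)

def evolve(pop_size=100, generations=300, survivors=10):
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        fittest = population[:survivors]
        if fittest[0] == TARGET:
            break
        # Each survivor spawns slightly altered versions of itself.
        population = fittest + [mutate(random.choice(fittest))
                                for _ in range(pop_size - survivors)]
    return population[0]

print(evolve())
```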

So maybe if Abe and Charlie had met down in the Galapagos Islands and discussed how to create the next great conversational Ai, they might actually have come up with something rather cool.

Thursday, February 12, 2009

The machines are coming...


There are a fair number of news articles around at the moment about the 'rise of the machines'. With the new Terminator movie due in May, it's not really that surprising, is it? Man vs. machine is a standard literary theme that has been around a long time.

http://www.nzherald.co.nz/technology/news/article.cfm?c_id=5&objectid=10556240

However, if the machines are ever really going to have a chance, they first have to figure out how to talk with us. English as a conversational language is tough. Even many native speakers aren't that good at it - just switch on daytime TV for examples.

At Virsona we are working on a world-class Dialog Engine. I see every day the complexity that we have to deal with in terms of trying to understand not just what is being said but how to respond to it. There are some incredible advances in Natural Language Processing, but we still have a long way to go. Integrating situational awareness, conversation context and domain knowledge are all complex problems that we (as humans) handle without thinking about it.

Perhaps at the end of the day all the machines really need to be able to say is 'I'm afraid I can't do that, Dave'.

How difficult can that be?