Looking back @ 2007

2007 has been an interesting year for me on many levels – I thought it’d be fun to step back and take a quick look at the year I’ve had…

In Business…
Professionally it’s been an exciting year as Clay Tablet grew and continued to gain traction. When I left the office for the Christmas break we had a full head of steam – interesting conversations with even more interesting companies were happening on an almost daily basis. I can’t wait to pick it up again on Wednesday and keep the train moving forward.

I’m not one for a big prediction list but I’d be fairly confident suggesting that 2008 is going to be a big year for the language industry. It’s been close to four years now since the initial spark of the idea that later became CTT. In the first few years we all had a distinct impression we were treading water, waiting for the wave.

None of us can quite pinpoint what, but something in the market changed in the last few months, and I’m excited to see how 2008 unfolds. I think John Yunker nailed some aspects of the change in his post “The End of Translation as We Know It”. Every show or conversation I come away from leaves me in almost perpetual “wow” mode as I hear some of the things that are in the works – technology is taking over, not to replace humans, but to augment and improve the skilled resources that keep the language industry moving.

In Life…


  • 1 more kid – we welcomed Kai into the world on October 21.
  • 149 blog posts – on average one every 2.5 days… not bad.
  • 150 (avg.) subscribers
  • 203 photo posts at my photo blog, “Found in Focus”
  • 3146 photos uploaded to Flickr
  • 10,000+ photos taken (8,300 since July)

All in all it’s amounted to one of the most fun and exciting years in a long time. Hopefully I can build on it for 2008…

Best wishes to all of you for a healthy and prosperous New Year!

Thanks for reading!


BTW: I’m on Twitter now – http://twitter.com/ryancoleman – If you’re using it too feel free to add me and I’ll follow you back…

In-chat Machine Translation via Google Talk

Saw on TechCrunch this AM that Google Talk now has a few bots you can add to your chats that will “translate” your conversations for you in real time.

It creates the translation through Google Translate, so at the very least you’d want to be sure whoever you’re talking with understands that some translations might be downright wacky. Needless to say, if the conversation requires clarity and exact directions (talking someone through heart surgery, supporting nuclear power plant operators or peace negotiations, for example), this is not an appropriate tool.

The implementation is a little clunky – you need to add a bot to the chat for each language pair, in each direction. For example, when talking to someone in French you’d need both the English-to-French bot and the French-to-English bot. I’m guessing this is the result of someone’s 20% time at Google. (Edit: They’ve since confirmed it is.)

I’d hope that if they had actually roadmapped this feature, the translation option would have been built into the tool. The system should really just know whether the people talking to each other are using different language interfaces or preferences. If it detects that two people with different preferences have started chatting, it could just throw up a “We see you’re talking with someone who may speak a different language, would you like us to translate for you?” kind of message.
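A minimal sketch of that detection idea might look like the following. To be clear, `maybe_offer_translation` and the `src2dst` pair-naming convention are purely hypothetical illustrations, not part of any real Google Talk API:

```python
def maybe_offer_translation(user_lang, peer_lang, supported_pairs):
    """Return a prompt message if the two chat participants have different
    language preferences and a bot exists for that direction, else None.

    Pairs are named "src2dst" (e.g. "fr2en"), matching the bot names.
    """
    if user_lang == peer_lang:
        return None  # same language, nothing to do
    pair = peer_lang + "2" + user_lang  # translate their messages into ours
    if pair not in supported_pairs:
        return None  # no bot covers this direction
    return ("We see you're talking with someone who may speak a different "
            "language, would you like us to translate for you?")

# An English speaker chatting with a French speaker would get the prompt;
# two English speakers would not.
print(maybe_offer_translation("en", "fr", {"fr2en", "en2fr"}))
print(maybe_offer_translation("en", "en", {"fr2en", "en2fr"}))
```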

Right now it supports 29 language pairs, which is kind of odd, as an odd number leaves the conversation a little one-sided… From my quick look it seems English-to-Bulgarian is the pair left out in the cold (though Bulgarian-to-English is supported). (See edit below: the real number is 24.)

All in all, for the time being it’s a fun toy but it’ll be interesting to see how this functionality evolves…

Google Blog post

Edited to Add: If anyone out there can read the Chinese text in the screen cap I’d love to know how legible it actually is. The English is passable, which is probably why they used it, but I wouldn’t be surprised to find out there’s some crazy stuff happening on the other end.

EDIT: They published the wrong list of language pairs on the Google blog initially… there are actually 24: ar2en, de2en, de2fr, el2en, en2ar, en2de, en2el, en2es, en2fr, en2it, en2ja, en2ko, en2nl, en2ru, en2zh, es2en, fr2de, fr2en, it2en, ja2en, ko2en, nl2en, ru2en, zh2en
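Out of curiosity, a few lines of Python confirm that the corrected list is fully symmetric – every pair’s reverse is also supported, so the one-sided oddity from the original 29-pair list is gone:

```python
# The corrected 24-pair list from the Google blog, as "src2dst" strings.
pairs = {
    "ar2en", "de2en", "de2fr", "el2en", "en2ar", "en2de", "en2el", "en2es",
    "en2fr", "en2it", "en2ja", "en2ko", "en2nl", "en2ru", "en2zh", "es2en",
    "fr2de", "fr2en", "it2en", "ja2en", "ko2en", "nl2en", "ru2en", "zh2en",
}

def reverse(pair):
    """Flip a "src2dst" pair to "dst2src"."""
    src, dst = pair.split("2")
    return dst + "2" + src

# Any pair whose reverse is missing would leave a conversation one-sided.
one_sided = sorted(p for p in pairs if reverse(p) not in pairs)
print(len(pairs), one_sided)  # 24 [] -- every pair has its reverse
```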

[Read] Sean Howard’s take on Mind State Messaging…

A couple of weeks ago I did a post called “[IDEA] Mind State Messaging”, based on some conversations I’d had with Sean Howard, now of Lift Communications.

Today he posted his “half” of the conversation on his blog, craphammer.ca, in a post entitled “Modeling the Role of Communications”. He delves into his thoughts on the subject and it’s a great companion read to my original post.

Both of us would love to hear your feedback & thoughts on the ideas….

– Ryan

Dramatic Re-enactments via Visual Thinking

Came across this story (via Jalopnik) – in Japan there’s apparently been a bit of a hunt on for a man who has been driving up to pedestrians and spitting coffee in their faces.

It’s a pretty basic concept that doesn’t take much explaining but for whatever reason they elected to create an animation to illustrate what happened:


The actual news report follows – it’s a bit slow to load, so if you want to save some time there are screenshots at the bottom of this post that will give you the idea…

Screenshots (via Japan Probe):


[IDEA] Mind State Messaging

Far too many weeks ago I had lunch with Sean Howard (a.k.a. “Craphammer”) – we’d been talking to him, and his team at SpinGlobe, about some Clay Tablet marketing activities and he wanted to share some ideas.

One idea he brought up was the concept of “mind-states” – in a nutshell, trying to identify what state of mind your target is in. It was a new concept to me (which is why I’m CTO instead of CMO) but made perfect sense once he explained it.

We talked about how to visualize the notion and by the end of lunch we had a napkin sketch that consisted of mapping mind-states to messages, basically the idea of targeting each message to the specific mind states of each user.

Tweaking it Further
That night on the GO train home I opened up Illustrator and decided to play with the concept a little further. Over the following days Sean and I shot the illustration, along with comments, back and forth, eventually arriving at this variation:


To which Sean simply responded “You’ve got to post this so we can discuss it with more people” – which I’m doing now, many, many weeks later (Sorry Sean :) )

The premise is pretty straightforward. The idea is broken into four general quadrants: “Mind States”, “Needs”, “Features/Benefits” and lastly “Messaging”. Each oval represents an item in that theme. Obviously in practical use these ovals would be text or images describing the specific element. I also used size to indicate importance (or, in the case of features, strength/support) – the bigger the oval, the bigger or more important the item.

Mind States rooted in Needs


My initial impression (and the bit Sean and I are still debating) was that behind each Mind State (which I at first considered to be an irrational state) there is a rational need or requirement.

I’ve dropped the notion of rational/irrational from the latest version, but the notion of a Mind State being rooted in a real Need or Requirement (or vice versa) is still very much there. For example, if the Mind State is “I want that promotion”, the thinking is that there’s a requirement or need in the background that would resolve, or contribute to resolving, that mind state. In this case it may be “deliver on sales targets”.

Needs can drive States, States can drive Needs.

Features & Benefits to resolve Needs


This was all fine and dandy, but the next consideration was how mind states and needs related to your product or offering. For the most part it’s hard to link features and benefits directly to a mind state. As far as I can see, no feature I can put into my software will resolve your mind state of “I want that promotion”, but if you can uncover the true need then you can build features or identify benefits that help resolve it. By recognizing that the user’s mind state is actually driven by a need (deliver on sales targets), we can now see that our “Automated Lead Identifier” and “Motivational Tool-tips” features can help the user achieve their need, by keeping them informed and motivated, which will hopefully resolve their mind-state.

Messaging around Features to speak to Mind States


Because Features don’t typically speak directly to Mind States, we need to close the loop with messaging. Messaging should speak to the mind state of the user. By working through the previous relationships we know that “I want that promotion” is resolved by the need to “deliver on targets”, which our product helps solve by “automatically identifying new leads”.

If you can craft messaging that speaks to their emotional mind state you have the opportunity to strike a real chord with them, then back it up with true features that have their needs in mind.

Mind-State Messaging in Product Management

The other side effect that came out of this exercise was the realization that this could also be used to work through product management issues. By using items that are scaled (or colour coded etc.) to represent the importance you can quickly get an impression of how your product’s features & benefits stack up. The image below shows how needs can be mapped to features or benefits, and how you can quickly gauge if your product is living up to the needs of your prospects/clients.


In example (1) you can see that the need is tiny, and likely not very important in the grand scheme of things, but look at the strength (and presumably the amount of effort that’s gone into it) of the feature in comparison. Likewise, in (2) a huge need is basically going unfulfilled.

Obviously, depending on who you’re specifically targeting, you won’t be able to get a perfect match (3) – in theory you’d have different Mind State maps for each persona you’re dealing with in the sales/marketing cycle – but this model still gives you some insight into the holes you may have in your product, especially if the same imbalance pops up on every model.

Anyways, this idea is still in the “half-baked” stage, but Sean and I really wanted to throw it out into the ether to see what others thought of it. I know Sean has actually thrown this into the mix on some pitches and projects over the past few weeks – but I’ll leave him to comment on where it worked/didn’t work.