Recommended Reading for December 14th

Here are some recent posts, sites or articles I’ve found worth a read – you might enjoy them too:

Have suggestions or comments? Leave them below!

Taming the Information Beast: A vision for the future.

The folks at the ICE (Interactive Content Exchange) Conference recently laid out the challenge to bloggers to define their vision of the future in Canada. I’ve got a few spare minutes so I thought I’d take them up on it.

My vision starts with today. Today’s youngest citizens will be the first generation to grow up in an age of ubiquitous information and, to be horribly clichéd, they are the future.

Information as a utility.

You want water?
Turn on the tap.

Light?
Turn on the switch.

Information?
Turn on the computer.

In just a few generations we’ve moved from a world where information trickled like a small stream to a place where there is a constant, unending & surging river of information. The notion of needing to have general information in your head is quickly becoming obsolete.

This change has many, many effects in many areas of our lives, but perhaps none more important than our approach to education.

The information is now there, at our fingertips, anytime.

Students no longer need someone to stand at the front of the room and tell them the answer. What they need is someone to ask them the question, then help them learn how to find the answer.

In my future…

… the teacher has to become the Guide, not the Oracle.

What students need are people who will help them learn to learn, digest, think critically, and ultimately synthesize the information they will consume every day of their lives. The “what” of education is still very important but it should be presented to students in the form of discovery. How they discover and then learn the answer will shape their ability to succeed in the future.

… Innovation will be found in synthesis.

In my future the best ideas will come from those people who can float on top of the river of information. These leaders will be able to take in all of the data around them, rapidly digest, synthesize, and finally remodel and deploy it to innovate.

… “We’ve always done it that way” will be the starting point, not the end.

Enterprises will become more agile as iteration and experimentation become the default behaviour. The future generation will treat the past as a place to begin, not a place to stop. They won’t be afraid to ask “Why?” and won’t accept “Because…” for an answer.

Learning how to tame and manage information is the biggest challenge we’re facing. If we can ensure our future generations are properly equipped then the future looks good for all of us.

Attributions
Tap – Malla Mi | Light – Vnoel | Rapids – bcostin | Iteration – jremsikjr | Map – Webber0075

Making the Connection: Integrated Content Lifecycles

One of the main reasons Clay Tablet came to be was our own experience with the frustrations and challenges of moving content through its lifecycle as soon as translation became involved. In our former life Robinson and I ran a couple of professional services groups which, on occasion, required us to support more than one language (typically English & French). We first put a bilingual content management system in place in early 1998 for a large telecom provider up here in Canada. At the time we had the same attitudes that I think many still share today about how the translation lifecycle fits into the picture – for many, the answer to how they handle translation of, say, a website falls into one of three buckets:

  • Not our problem
  • We’ll export the content and you can cut and paste it back in (a.k.a. “Not Our Problem”)
  • Have the translator log in to the system and do the translation there (a.k.a. “Not Our Problem”)

And for many years none of those attitudes was really the “wrong” answer, as translators were still largely working with purely people-powered, manual processes. Anecdotally we’ve seen a lot of evidence that the content management and translation industries still operate in very different spaces. Some of the big guys on both sides have made attempts at connecting Content Management Systems with Translation Management Systems, which is a good start. Recent announcements, such as LISA and Gilbane working together to offer a globalization track at the Gilbane conferences this year (CTT will be at the San Francisco show Apr 10-12), are also good signs that the two sides are entering into a dialogue that will benefit everyone.

That said, it’s still an almost daily occurrence that I have a conversation with a CMS vendor/integrator/customer, or read a case study, where the extent of “integration” involves throwing content over the wall at the translators or letting the translators come into the system to do their work right there.

Looking for the “Translate” Button
A Common Sense Advisory study recently suggested up to 91% of the content translated by a TSP is still done through a mostly manual process. I think a common mistake is that many people unfamiliar with the translation industry make too big a leap when they start considering an integrated approach to translation management. We continually, as I suspect many other translation technology vendors do, have to educate potential customers who immediately think that our software will perform the translation for them. The panacea for content creators and managers is that a company will come along and offer them a big red button with “Translate” written across it – all they have to do is push it and instantly their content comes back translated. Tools (toys?) like Babel Fish and Google’s Machine Translation system are dangled in front of them, and to the unfamiliar they make it appear that the big red button is here (or quickly approaching).

The reality, of course, is that despite the advances by companies like Language Weaver, it is still a long way off and the human factor of translation is still very, very much part of the process – and I believe it always will be. The key thing to remember is that at the end of the day every machine translation system out there still needs to be taught – taught with content translated by humans.

The real trick here is to back people a few steps away from the big vision. They can, and should, have the “Translate” button today, but their expectations around “time” need to be tempered. There’s no excuse today for the stretch between the moment someone “creates” and the moment the translator “translates” not being completely automated, but the human translator, just like the author, remains a critical component of the process.

The Content Lifecycle
The content lifecycle is typically depicted as one system controlling the entire process in one continuous loop. If a task can’t be performed within the confines of the application, content is typically unceremoniously spit out, and content managers must pick up the mess and have the translation performed. At the translators’ end, they get a deluge of assorted files from clients, in various formats, and they must scramble to assemble the project and perform the translation.

My suggestion though is that this is only half of the picture, and not a scalable approach. Just as content authors should be free to work in the tools that allow them to be most effective, so should translation professionals – and neither side should have to perform the same, time-sucking tasks over and over just to get the content into and out of the translation process. For this to happen content needs to move between systems to suit the context of the tasks that need to be performed on it.

The reality is that as content moves through its master lifecycle it actually moves through different systems, each with its own workflow “eddy”. Each system controls the content as it moves through its own internal workflow. This type of arrangement is what I refer to as the Integrated Content Lifecycle (ICL).
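To make the idea concrete, here is a minimal, purely illustrative sketch of an ICL hand-off in Python. None of these names (ContentPackage, CMSWorkflow, TranslationWorkflow) refer to any real product or API – the point is simply that each system keeps its own internal workflow while content travels between them in a neutral package instead of being cut-and-pasted by hand.

```python
# Illustrative only: a neutral "package" moving between two independent workflows.
from dataclasses import dataclass, field

@dataclass
class ContentPackage:
    """Hypothetical envelope that travels between systems."""
    content_id: str
    source_lang: str
    target_lang: str
    body: str
    translated_body: str = ""
    history: list = field(default_factory=list)

class CMSWorkflow:
    def publish_for_translation(self, pkg: ContentPackage) -> ContentPackage:
        pkg.history.append("CMS: approved and handed off for translation")
        return pkg

    def receive_translation(self, pkg: ContentPackage) -> None:
        pkg.history.append("CMS: translated content re-imported and queued for review")

class TranslationWorkflow:
    def translate(self, pkg: ContentPackage) -> ContentPackage:
        # The human translator still does the actual work in their own tools;
        # only the hand-off and re-assembly are automated here.
        pkg.translated_body = f"[{pkg.target_lang}] {pkg.body}"
        pkg.history.append("TMS: translation completed by human translator")
        return pkg

# One trip around the integrated lifecycle.
cms, tms = CMSWorkflow(), TranslationWorkflow()
pkg = ContentPackage("press-release-42", "en", "fr", "Welcome to our new site.")
pkg = cms.publish_for_translation(pkg)
pkg = tms.translate(pkg)
cms.receive_translation(pkg)
print("\n".join(pkg.history))
```

The design point is simply that the hand-off and re-assembly steps are automated while each side keeps working in its own tools.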

Make Applications Open
The challenge today with the notion of an Integrated Content Lifecycle is that many of today’s systems just can’t support it, and competitive forces continue to keep applications closed rather than open. The old-school thinking was that you build a killer product set, close it off, and then make your money selling all its bits and pieces throughout an enterprise – another company wants to work with your client? They buy your software too! Easy!

Today, though, that just won’t work – with the growing momentum of Service Oriented Architecture (SOA) or “Web Services”, corporations are expecting more and more freedom and flexibility.

There’s also the matter of multiple organizations in the mix now. In a monolingual context it’s certainly possible that the content lifecycle will never extend past a single system, let alone past your company’s walls. With multiple languages though the likelihood of external vendors being involved is very high and you’re almost guaranteed to need to move that content from one system to another during the course of translation. As I mentioned above the tools used by a content manager, and those used by a translator can be very, very different.

I’m not saying a software company shouldn’t make their software interact with their own products – I’m suggesting the opposite in fact. By all means they should build their systems so they tightly integrate within the family, just not at the exclusion of all others.

Here’s the kicker – I mentioned before that competitive forces keep systems closed, but the reality is that taking this philosophy should actually make it EASIER for an organization to make a better product, more efficiently, with a competitive edge that will translate into more sales. How? When you’re a closed system your product has to do everything for that given process – even if it is only really good at one component of the process. With an open system you are free to concentrate and improve on the specific features or functions your software does best, and then find partners and other software that help complete the picture for your clients.

If you have a closed system you have two options when a client asks you about a key feature that you don’t support or don’t do very well: try to roll it into the product (another feature to support/build) and hope the opportunity still exists when you’re ready, or say “we don’t support that” and risk losing the sale.

For an open system the answer becomes much easier: “We don’t natively support that, but we’ve partnered with company X, who does, and we integrate directly with them.”

And to be clear, it’s not just giant LSPs or massive multinational corporations I’m talking about – these are small companies whose entire margin for the year could be sucked up quite handily by a typical custom integration project. On all fronts there is a clear and pressing need for systems to openly communicate with each other in an easy, predictable fashion.

Increased Awareness & Conversation
I think the biggest challenge to date, though, is still one of awareness. It’s almost like a blind date by ambush – only by the time the CMS guys realize they’re on a date, they’re married, have kids and, most importantly, a wife who’s tired of being just an afterthought.

I’ve seen many CMS companies patting themselves on the back for their “multilingual” support, but once you dig deeper the level of support is that they can display multiple languages (I mentally give them a UTF-8 “gold star” each time). Those who have some support for managing synchronized versions of multiple languages often respond to questions about workflow with one of the three answers listed above – it’s not because they don’t care; I think a big part of it is that they just don’t know. Just about every CMS vendor we’ve talked to, once we explain how it can work, is on board with the notion of bringing more automation to the process (it saves them a lot of headaches too).

As I mentioned before, though, both sides are waking up to each other right now – well, CMS is waking up; I suspect the translation vendors are lying awake thinking “It’s about time”. With the LISA/Gilbane arrangement and the GALA pavilion at AIIM, the awareness between the two camps is only going to grow and I think we’ll see a lot of exciting changes.

That said, the localization/translation industry could admittedly be a little more vocal – it’s taken us a good three years now to start to get some understanding of the industry as a whole, but it is still a weekly occurrence that we discover someone or something that was completely off our radar. Slowly but surely blogs are starting to appear, but I think there’s a lot more discussion that needs to happen from translation & localization professionals out to the market at large. The language industry as a whole is a very complex place and it really is incumbent on the people who know and understand it to help the people who turn to it understand the best practices.

(As a sidenote: if you write a localization/language/translation blog add it to the comments below and I’ll compile a list for a future blog post)

A sense I get a lot is that translators, as frustrating as it is for them, will take just about anything from clients – a grin & bear it type scenario. This is unfortunately just a result of the fact that for many years grin & bear it really was their only option, but with technology advances and changes on both sides it’s getting less and less necessary.

I was amazed when I was talking to one LSP and they mentioned there were over 20 different client systems that their translators would log into in order to work on jobs. 20! Integrated Content Lifecycles would allow that organization to use one central workflow tool, while allowing their clients to connect their workflow and management tools to them. Imagine the savings for both sides.

In Summary
Overall I think the general theme is “Be open and interact on all levels” – from the back office systems all talking to each other in one cohesive infrastructure, to the people on the front lines working with each other to understand the full spectrum of what’s involved. The language and content management markets are at a major intersection where both need to get in sync with each other so we all go down the same, prosperous path together.

Why Blogs Aren’t Going Anywhere (and Aren’t Remotely Close to Peaking)

Because of various Internet connection issues at the office and a marathon 22-hour, bed-to-bed, one-day trip to Chicago, I had fallen way behind on the feeds in my reader. Despite endless scrolling (literally endless thanks to the river scroll on Google Newsreader) my number of unread posts continued to stay at “100+” – thankfully today seems to be a slow posting day, so after some concerted early-morning effort it seemed like there was a light at the end of the tunnel. It’s hard to just purge, as I’m admittedly one of those people who hates to think I’m missing something.

After spending a few hours actually getting real work done I turned back to finish the last little bit of the “Backblog”…

This was the last post in my reader. I started reading Russel’s blog a few months ago after I followed another blog’s cross-linking “rabbit hole” and ended up there. Each month he polls his readers for what they consider to be the best post of the month, anywhere. This month’s winner was The Amateur Gourmet and his post entitled “Chutzpa, Truffles and Alain Ducasse” – it truly is a brilliant post and worth a read – I actually laughed out loud as I read it.

It also sent me off on a completely different thought tangent, though. After scrolling through several hundred varied blog posts over the past couple of days, it really reinforced that, at the end of the day, everyone just wants to be heard and, if possible, considered “relevant” once in a while.

Before the web, the extent of your ability to have a “voice” depended on how deep your pockets were or who you knew at the local paper. These days sharing your voice takes only five minutes at a site like Blogger – a few clicks of the mouse and you’re online, listed in their directory, and putting your thoughts out into the world for any or all to read. Message boards and forums are a great example of this kind of transformation starting to take place.

As odd as it sounds, I think one day blogs will almost be considered therapeutic by many people. Personal case in point: recently I’ve had a handful of truly frustrating customer service experiences. In the past I would fume about it, probably complain to my wife, and then it would hang over me for a few days. Now I can blog about it, (usually) share how I think they can fix their issues, and know (thanks to stats) that at least a handful of people from that organization see it. Five minutes of typing and I’m cooler, calmer and it’s out of my system.

On a side note: the scary reality “business” in general needs to realize is that the numbers for how many people happy customers tell vs. how many people disgruntled customers tell are getting much, much bigger. The old stats were somewhere around a person sharing good experiences with 8-10 other people and bad experiences with 18-25. The last two “customer rant” posts I did both received north of 100 views and still get traffic to this day. (Here’s the kicker, though: the happy-experience posts also tend to get a higher portion of hits initially, but anecdotally I don’t see them getting as much follow-on traffic down the road – people don’t tend to search for other folks having good experiences….)

Blogging is still very much for the early adopters, but as the months go by I have to admit I find more and more people who I’m surprised to see have a blog. A major challenge right now, though, is that it takes some know-how and even technical ability to get your site properly indexed by the right sites (e.g. Technorati, Google Blogsearch, etc.), which also makes it harder for people to build traffic – and I think that makes a huge difference in people keeping up with posting on their blog. It’s one thing to have a voice; it’s something else to have Sitemeter or Feedburner tell you no one is listening.

As we see these systems become more tightly integrated and turnkey (there’s no reason why Blogger shouldn’t have a screen during setup that registers you with Technorati etc.), I think the medium as a whole will become a lot more approachable. At the end of the day I fully expect that by the time my two-year-old gets to high school, blogs (or whatever they morph into) will be an important part of English & Communication classes (Creative Writing, Current Events etc.). I do believe we’ll hit a point where having a blog is just something you do.

One thing I’ll be really curious to see, and I hope the Freakonomics guys look at one day in the future, is how certain things like violent crime (especially school-related crime and serial killers etc.) relate to the rise of blog culture. Many of these crimes are people lashing out at something, essentially a cry for help when all of their other venues of having a “voice” have broken down. I’ve seen several cases personally where people on a message board or blog have rallied around someone in a moment of crisis. In many school shootings it’s later discovered that the person had notes, art or even webpages that would have been massive indicators that something wasn’t right. Will things like blogs start to help people identify problematic situations before they happen rather than provide hindsight clues to why someone did what they did?

Just imagine how things might have been different if some people in history had had blogs, both good & bad. Off the top of my head: Martin Luther King? Hitler? Jeffrey Dahmer? Anne Frank? How would blogs have changed their lives, and their impact on the world and society? How much faster and easier would King’s words have spread? Would Hitler have continued to grow as a successful artist and sold his works through his PhotoBlog/PaintingBlog & PayPal instead? Would someone have recognized that Dahmer needed help? Can you imagine if, rather than emerging years later as a book, Anne’s story was told through her blog?

Her first diary entry:

“I hope I will be able to confide everything to you, as I have never been able to confide in anyone, and I hope you will be a great source of comfort and support.”

During the recent skirmish between Israel and Hezbollah I read the daily updates (when he had power/internet) of a young artist who was living in Lebanon when fighting broke out. His daily illustrations and comments provided a unique, and very different, perspective on what was happening (I’ll try and find the link again) compared to the traditional media. He wrote it for no one in particular yet intended it for everyone. It really was one man putting his voice out there for all to hear – I wonder, too, did his blog become a new alternative to doing what was likely the only other option available to young, angry Muslim men for so long (joining Hezbollah and taking up arms)?

At the end of the day we’re still very much in the infancy of this medium. Sure, it’s old hat for the tech crowd, but I think it’s only just beginning to enter the mainstream from a readership point of view, let alone the stage of active contribution.

Blogs aren’t going anywhere. And whoever is trying to claim they’ve peaked is just linkbaiting.

Where are Nav Systems Headed?

(Photo by killrbeez on Flickr)

GPS & navigation systems are technologies that fascinate me – and now, with the capability of hooking them onto the cellular network, I think it’s only going to get more and more interesting.

Traffic Flow / Patterns
First off, adding cellular to the mix starts to create the prospect of two-way communication. Up until now GPS has largely been a fixed base of information, from a specific moment in time, on a DVD in your car or a flash drive in the unit itself. The problem with this, of course, is that route problems, construction, traffic etc. couldn’t be factored in. Errors were a problem too – I’ve seen some wacky stuff spit out by route-generating systems, but there was no easy way to let the manufacturer know. Between that and the relatively small penetration in the market, getting the bugs out of the system was pretty difficult.

I think this is why many of the systems like Google Maps & MapQuest exist – it allows them to have a much broader user base pushing through a huge volume of route requests, and it allows two-way feedback for errors. I’ve sent Google Maps a few error reports over the years, each of which has been resolved surprisingly fast.

But now, with two way communication it becomes possible to actually start to use data from GPS enabled cars to create real-time traffic information.

Imagine that as you drive along your car is uploading stats on your speed and the road you’re on to a central server; it’s also monitoring how Joe, a guy you don’t know, is faring up ahead, further down the route it has suggested for you.

Suddenly Joe’s speed changes dramatically and he begins moving very slowly. The system checks against the regular traffic flows for that time of year and that weather. It can also look at information it is receiving from other cars on the same road. It can see that a few hundred metres down the road traffic is moving as fast as, or faster than, normal, and understand that something is wrong somewhere on that stretch of road between Joe & the other vehicle. (It will probably go one step further, and these systems will be tied to the car’s airbag sensors. If the airbags in a car get tripped, it can alert the authorities as well as update the nav system servers that there is a crash.)
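To show how simple the core of that detection logic could be, here is a hedged sketch in Python. The thresholds and numbers are invented purely for illustration – a real system would obviously use far richer data.

```python
# Illustrative only: flag a probable incident when one stretch of road is far
# below its normal speed while traffic just downstream is still moving normally.
def likely_incident(reported_kmh, normal_kmh, downstream_kmh, slowdown_ratio=0.4):
    stalled = reported_kmh < normal_kmh * slowdown_ratio
    downstream_clear = downstream_kmh >= normal_kmh * 0.9
    return stalled and downstream_clear

# Joe suddenly drops to 15 km/h on a road that normally flows at 90 km/h,
# while a car a few hundred metres ahead is still doing 95 km/h.
print(likely_incident(reported_kmh=15, normal_kmh=90, downstream_kmh=95))  # True
```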

This is where the real potential starts to kick in. Based on its knowledge of the traffic ahead of you, it could now dynamically reroute your vehicle by polling vehicles moving on parallel routes and choosing the fastest route. This, though, brings us to an interesting challenge that I can’t wait to see how it gets solved (or is being solved).

Load Balancing
Take the situation I outlined above but multiply it by thousands. The reality is that over time most, if not all, cars will have some kind of GPS Navigation System. So what happens when these systems all start to react to travel troubles up ahead?

As they scan the possible alternates, those routes should all generally be moving at or around normal speeds, oblivious to the accident because it has literally only just happened within the past few seconds. If all of the nav systems pick the highest-speed route, they’re going to push all of the traffic seeking alternate routes onto that road, causing a deluge of traffic which will create another jam – and probably make the problem worse.

People far smarter than me are going to have to come up with a method to ensure that systems are not only monitoring what is happening at the time but also looking ahead to see what kind of traffic is coming down the pipe. Working backwards from the accident, the system will need to start redirecting vehicles and balancing them out across all of the available alternate routes – up close the adjustments will be minor (exit and get back on the highway at the next exit), while routes could get more drastically altered the further back people are.
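One very rough way to picture that balancing act: instead of sending every rerouted car to the single fastest alternate, spread them across the alternates in proportion to how much spare capacity each one has. The sketch below is illustrative only, with made-up route names and capacities, and ignores all the real-world complications.

```python
# Illustrative only: distribute rerouted cars across alternates in proportion
# to each route's spare capacity, rather than dumping them all on one road.
def balance_reroutes(cars_to_reroute, alternates):
    """alternates: {route_name: spare_capacity_in_cars}"""
    total_spare = sum(alternates.values())
    assignments = {}
    assigned = 0
    for route, spare in alternates.items():
        share = round(cars_to_reroute * spare / total_spare)
        assignments[route] = share
        assigned += share
    # Put any rounding remainder on the route with the most spare capacity.
    assignments[max(alternates, key=alternates.get)] += cars_to_reroute - assigned
    return assignments

print(balance_reroutes(900, {"Route A": 600, "Route B": 300, "Route C": 100}))
# {'Route A': 540, 'Route B': 270, 'Route C': 90}
```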

Predicting Traffic
Above I’ve covered the notion of reactive navigation – i.e. something happens and the system reacts. Another area I’ve seen some blips of news about (and what sent me off down this path) is predicting traffic. There are already companies like Inrix popping up that do predictive traffic modeling. They collect all of the traffic data that is pouring in from these systems, the traffic flow monitors managed by the government, etc., and then they can begin to understand how traffic reacts to specific events.

For example, a baseball game lets out – in some cities this means 40,000+ people exiting from one specific building, many of them via cars. A predictive system models this and over time can build a pretty confident prediction about what will happen when the game ends. Now imagine you’re driving along, listening to the game on your favorite AM radio station – in the background the predictive system is also “watching” the game through news feeds etc. As the game ends, it kicks into gear and begins to adjust your route to ensure you don’t get caught in the snarled mess that is about to emerge on the freeway surrounding the stadium.
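In toy form, the predictive piece is just “add the expected delay to the affected segments before choosing a route”. The sketch below is a deliberately simplistic illustration; the segment names, travel times and the 25-minute delay are all invented.

```python
# Illustrative only: when a known event is about to end, add a predicted delay
# to the road segments around the venue before picking the cheapest route.
BASE_MINUTES = {"freeway_by_stadium": 10, "downtown": 18, "ring_road": 22}

def pick_route(routes, predicted_delays):
    """routes: {name: [segments]}, predicted_delays: {segment: extra_minutes}."""
    def cost(segments):
        return sum(BASE_MINUTES[s] + predicted_delays.get(s, 0) for s in segments)
    return min(routes, key=lambda name: cost(routes[name]))

routes = {"fastest_normally": ["freeway_by_stadium"],
          "around_the_stadium": ["ring_road"]}

# The game just ended: the model expects 25 extra minutes near the stadium.
print(pick_route(routes, {"freeway_by_stadium": 25}))  # around_the_stadium
```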

Monetizing
So this rebalancing/rerouting brings up a few interesting ideas – not the least of which is who gets priority. As the system starts balancing out traffic there may be some routes that are better than others. Perhaps there’s a business model in the quality of the traffic information and solutions you receive. Imagine taking this system and applying a freemium-style model to it. Basic GPS systems will get you from point A to point B, with basic traffic information but no automated rerouting. The next tier automatically reroutes you on a distance-based model, or only adjusts your route within a certain proximity of the issue ahead of you. Finally, the premium tier not only handles your rerouting but also routes you onto roads that the system deliberately balances with a lighter load when there’s an issue.
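A back-of-the-napkin sketch of how those tiers might map to behaviour, purely as illustration – the tier names and the 5 km threshold are made up:

```python
# Illustrative only: the tier you pay for decides how aggressively you get rerouted.
def rerouting_behaviour(tier, km_to_incident):
    if tier == "basic":
        return "show traffic info only, no automatic rerouting"
    if tier == "plus":
        # Only bother rerouting once you're reasonably close to the problem.
        return "auto-reroute" if km_to_incident <= 5 else "no change yet"
    if tier == "premium":
        return "auto-reroute onto deliberately lighter-loaded roads"
    raise ValueError(f"unknown tier: {tier}")

for tier in ("basic", "plus", "premium"):
    print(tier, "->", rerouting_behaviour(tier, km_to_incident=12))
```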

I imagine too that this kind of data would be extremely valuable to municipalities when budgeting for road repairs and maintenance, as well as when doing traffic flow studies. If the same spots get jammed up every day regardless of conditions, they’ll know they’ve got a flow problem and can work to fix it.

In the End…
I think we’re getting a lot closer to this than many, even I, realize. I expect that most, if not all, of the points I raised here are in development or even being tested. I hope much of this plays out – I think it’ll be really neat to see in action.