Working with Recruiters – Heller Search

Paying it forward for other CIO/CTO job hunters: Heller Search Associates is an executive-level technology recruiting firm (MA-based) that seems to “get it” in a much more substantive way than the Rolodex-and-cell-phone crowd. The site and blog offer many valuable resources and a wealth of tips and approaches that are obvious – in hindsight.

Posted in Recruiters | Leave a comment

Stop. Restart. Go.


For the last several years I’ve done very little with this blog other than use it as a promotional tool for OpenCalais. That’s due to a combination of being incredibly busy and having to be extraordinarily careful about speaking as an individual vs. speaking as an executive representative of Thomson Reuters. Well – the second limitation isn’t an issue anymore (see “About”).

Now I can open the gates a bit. There are three key areas I’ll be concentrating on. I’ve picked these for two simple reasons: 1) I’m interested in them and know a little bit about them, and 2) they’ll support my finding the kind of role I’m looking for next.

Interspersed with one another, and at no committed interval, I plan to talk about:

  • Big Data: This is an area I’ve been working in for decades, and I see incredible promise – and very high risk. It’s clearly the meme du jour, and like other big new ideas we’re poised to oversell it, to focus on the technology rather than the value, and to generally get overexcited. I plan to talk a bit about how to make certain that the money you throw down the Big Data hole comes back to you and your organization. I’ll also talk about the major differences between why a VC might be excited and why you as a potential user should be disciplined.
  • Business Alignment: Over my last few roles I’ve had the opportunity to own, or participate in owning, the alignment of resource utilization (aka dollars out the door) to real or perceived market opportunity. What I’ve found has been bad. Really bad. On average, 30-40% of Development / Engineering spend, 50%+ of Sales and Business Development spend, 40% of Marketing spend and perhaps 30% of Product spend are not well aligned to a simple model of what the marketplace wants to buy. There are some methods for fixing this quickly that I’d like to share. Everyone pulling in the same direction can make a big difference – fast.
  • Odds and Ends: Interesting technology developments, Apple geekdom, intriguing new business models and the occasional juicy rumor.

Ok, that’s where this blog is headed. Next week I’m going to start digging into Big Data and how this big fish is starting to rot from the head down. Time to put your checkbooks away and make sure you have some of the basics in place.

Posted in Meta | Leave a comment

Goodbye OpenCalais, Thanks and Stay in Touch


OpenCalais community members:

I’ll be leaving Thomson Reuters and hence ending my involvement with OpenCalais in the next few weeks. I wanted to take a moment to thank everyone – from users to journalists to convention organizers – for making OpenCalais the success it’s been.

The initial years of OpenCalais were among the most amazing in my career. The level of passion, the number of smart people I met and the number of interesting ideas I heard was amazing. While OpenCalais was never a full time job for me – it was certainly where the majority of my passion lay.

Please stay in touch via LinkedIn. I have no idea where I’ll be heading next – but I’ll keep LinkedIn up to date. Recently I’ve been running all Product and Engineering for Reuters Media – I’m looking for a similar level of challenge.

OpenCalais is being left in good hands. We’ve transferred ownership of the initiative to Philip Kardos – one of my most senior product managers – and I am absolutely confident in the continued stability and growth of the system. And our fantastic and responsive community manager Fran Sansalone’s role remains unchanged.

Thanks again. It’s been a wild ride and I’ve loved every minute of it.


Posted in Uncategorized | Leave a comment

Spring Cleaning and Some Touch-ups (OpenCalais)

A New OpenCalais Release On the Way

In just about one month we’re going to open up the next release of OpenCalais for beta testing. While the upgrade should be 100% backwards compatible – it’s worth setting aside a little time for testing as well as exploring some new features.

What’s Coming?

Under the covers there are a number of improvements to our processing pipeline. As an end user you won’t see these – but they set the stage for greater flexibility in the future.

On the user-facing side of the equation you’ll see a number of new entities, facts and events related primarily to politics and intra- and international conflict. It doesn’t look like either of those will be going away soon – so we thought they were worth implementing. You’ll see new information on Candidates, Party Affiliations, Arms Purchases and a number of others.

In addition to these new items, we’ve also enhanced our SocialTags feature for greater accuracy – in fact, you’ll see a number of accuracy improvements across the board.

Next Steps

So – set aside a little time for a quick test in about a month. If you care about elections and conflict – take a look at the new features. We’ll run the beta for approximately a month to gather any issues and will roll out into production following that.

Posted in Calais | Leave a comment

“News Ninjas” and Cryptic Twitter Posts

A few days ago I tweeted that I was building a team of News Ninjas and was looking for candidates with a good mix of news (newspaper or broadcast) and technology capabilities. That was sufficiently cryptic to generate a few questions – so now I’ll try to give a few answers.

We want to hire several great people! Now!

I’m in the market to hire a small team of news-passionate people to help us with our mission: to transform the Reuters News Agency into the next generation partner for our clients. (Yes – in response to *many* inquiries – these are real positions, with actual dollars and desks and benefits and all that stuff. And in response to other inquiries – yes, OpenCalais is involved, but we’re going way beyond just that.)

For well over 150 years Reuters has been a leading provider of news to the world. Our customers cover the globe and include broadcasters and newspapers from gigantic to mid-sized. We provide them with a full range of media including text, images, video and still photography. That’s our core and our heritage – and we’ll be working hard over the next few years to expand and improve on it.

But – there are more opportunities to deliver value to our clients. The news industry is changing dramatically – and we believe we have the opportunity to offer an increased range of services and products to help make our clients successful. In some cases these new capabilities will benefit just our clients – in others we hope to leverage our investments to benefit the news industry as a whole. We know some of what we need to get built and we’re engaging with our clients, prospects, advisers and the community as a whole to understand where there might be additional opportunities.

We’re Getting Ready to Do New & Cool Stuff

Obviously I can’t talk about specifics in a public forum (at least not yet) – but we’re looking at solutions that range from archive monetization to more flexible content syndication to better newsroom workflow capabilities to tools that enable investigative journalism – basically anything that helps improve our customers’ business.

What do we need to get all of that done? The answer is pretty simple: we need great people who understand both the business and the technology we can bring to bear. That’s what I’m looking for. In these positions you’ll be a member of a team inventing and responding to potential business propositions and capabilities, evaluating them in the marketplace, and building, delivering and evangelizing them – basically from concept to execution. Our list of requirements is pretty straightforward:

- You need experience in the news industry – broadcast or newspapers. Online/digital experience is a big plus.
- You need to have some technical background. It’s important that our team members bridge the business / technology gulf themselves.
- You have a diversity of experience. You’ve played different roles in different projects.
- You can work as a member of a team. Really. I mean it.
- You’re comfortable with, and have a track record in, public-facing evangelism of your ideas.
- You’re (probably) already located in NYC.
- You know how to get things done – by managing, by leading, and by pitching in and working yourself.

That’s it.

If you’d like to be considered (and I promise to carefully read every CV submitted) please go here and drop an application. This is one place where we need to follow the process – I’m happy to answer questions via the email address below – but applications need to come through the machine. If you don’t think you are a candidate yourself but would like to pass along a name, please drop me a note at the address below. Feel free to drop questions there as well.

Big changes are coming. Come make them happen with us.


Posted in Uncategorized | Leave a comment

Why OpenCalais?

(Re-purposing a post of mine from the Calais blog.)

Over the last few months you’ve probably seen a number of announcements about how OpenCalais has been chosen by one organization or another to support its business.

In a number of recent meetings I’ve been asked the (very fair) question: Why OpenCalais and not one of the other entity extraction services out there?

Given that the question seems to come up more often as the number of extraction services increases, I thought I’d share my best understanding of why many major players we’ve announced (and an equal number we haven’t) have chosen to go with OpenCalais. And – at the end – I’ll mention a few reasons why others haven’t chosen OpenCalais.

So, in no particular order, why do organizations choose Calais?

Thomson Reuters

OpenCalais is provided by Thomson Reuters – the largest professional information organization in the world.

If you’re interested in kicking around some semantic technologies in your spare time this doesn’t really matter. If you’re incorporating those technologies deep within your business – or, as is true with many users – actually building a new business on top of them, this becomes pretty important. Basically – you need to know that the service is going to be there for you.

Facts & Events

With the increase in structured content assets like Wikipedia / DBpedia, it’s become pretty easy to knock out a basic entity extraction tool. And – while we like entity extraction as much as anyone else – it’s really just the tiniest starting point in what you can and will need to do.

OpenCalais extracts a wide range of facts and events from unstructured content and lets you know what’s happening in your content – not just tags for things.

  • Facts are things like “John Doe is CEO of XYZ Corporation.”
  • Events are things like “XYZ Corporation today announced that it would acquire ACME Corporation.”
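To make the entity / fact / event distinction concrete, here’s a minimal sketch in Python. The record shapes and field names below are invented purely for illustration – the real service returns RDF, not these structures:

```python
# Illustrative only: invented record shapes, not the actual Calais output schema.

# An entity is simply a typed thing found in the text.
entity = {"type": "Company", "name": "XYZ Corporation"}

# A fact is a relation between entities.
fact = {"type": "PersonProfessional",
        "person": "John Doe", "role": "CEO", "company": "XYZ Corporation"}

# An event adds an action and a status.
event = {"type": "Acquisition",
         "acquirer": "XYZ Corporation", "target": "ACME Corporation",
         "status": "announced"}

print(entity, fact, event)
```

The point: an entity list alone tells you *what* is in a document; facts and events tell you *what is happening*.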

OpenCalais is the only service that does this in a production-strength manner.

Reliability

OpenCalais stays up. It’s hosted in mirrored data centers thousands of miles apart from each other. It’s monitored 24×7. It basically doesn’t go down – even during system upgrades and maintenance. We stopped adding 9s after we got beyond 99.99% uptime.
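For a sense of what those 9s mean in practice, here’s a quick back-of-the-envelope calculation of the downtime each uptime figure allows per year:

```python
# Allowed downtime per year implied by an uptime percentage.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for uptime_pct in [99.9, 99.99, 99.999]:
    downtime = MINUTES_PER_YEAR * (1 - uptime_pct / 100)
    print(f"{uptime_pct}% uptime allows about {downtime:.0f} minutes of downtime per year")
```

At 99.99%, that’s under an hour of downtime per year.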

Accuracy

We’ve been building the tools underneath OpenCalais for over a decade. They’ve been used by hundreds of organizations and many thousands of end users. One of the things we’ve learned is that accuracy matters. While no NLP system is perfect, we’re convinced ours is the best, and we have some ideas in the pipeline to increase accuracy even more.

Integrations

We basically focus on providing great semantic plumbing. But we know that not everyone wants to be a plumber. We’ve worked to integrate (or motivate others to integrate) OpenCalais with a wide range of tools including Drupal, WordPress, WordPress Multiuser, Oracle, Lucene, Coldfusion, Flash, Firefox, Prolog, Lisp, Django, Java, PHP, Python, Alfresco, Perl, .NET, Ruby, TopBraid and a few others.

From content management systems to language-specific libraries – there are lots of ways to get started quickly.

Linked Data

We’re serious about Linked Data. We’re also worried about the proliferation of incorrect links and the effects of link rot. So, rather than just pointing to Linked Data assets out on the cloud and risking that they’ll go stale, we host our own Linked Data cloud, which is kept up to date with both Thomson Reuters contributed content and regularly validated links to other sources such as DBpedia and Freebase.

Categorization

Pure semantic extraction is great – but sometimes you need more. If you’re writing about Porsches and Ferraris you’d probably like to have categorization concepts like “sports cars” and “automobiles” returned to you with your semantic metadata. OpenCalais does this via our ever-improving SocialTags concept tagging capability. It’s good now, and it’s going to get a lot better soon.

Focus

OpenCalais is here to provide great semantic plumbing. We’re not trying to sell ads. We’re not trying to provide the prettiest decorations for blogs. We build the plumbing – you architect the solutions.

Now, in a spirit of transparency, here’s why some people don’t choose OpenCalais:

Language Support

We’re great in English and okay in French and Spanish (we extract entities but neither facts nor events in these two languages). We intend to implement more languages in the future – but for the time being we’re concentrating our efforts on improved functionality and accuracy in English.

Complexity

OpenCalais isn’t a simple tagging tool. What it returns to the calling application is a reasonably complex RDF construct. It takes a little time to get up to speed on RDF and how to use it in your applications. We think it’s worth it because it’s the most flexible and powerful format we know of.
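For readers weighing that learning curve: even without a dedicated RDF library, a small RDF/XML document can be picked apart with standard tooling. The snippet below is hand-written for illustration – it is not actual OpenCalais output, and the `ex:` schema is invented:

```python
import xml.etree.ElementTree as ET

# A tiny hand-written RDF/XML document; the ex: vocabulary is made up
# for this sketch and does not reflect the real Calais response schema.
RDF_XML = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.com/schema/">
  <rdf:Description rdf:about="http://example.com/entity/123">
    <ex:entityType>Company</ex:entityType>
    <ex:name>IBM</ex:name>
  </rdf:Description>
</rdf:RDF>"""

NS = {"rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
      "ex": "http://example.com/schema/"}

root = ET.fromstring(RDF_XML)
for desc in root.findall("rdf:Description", NS):
    # Namespaced attributes use the {namespace}localname convention.
    uri = desc.get("{http://www.w3.org/1999/02/22-rdf-syntax-ns#}about")
    etype = desc.findtext("ex:entityType", namespaces=NS)
    name = desc.findtext("ex:name", namespaces=NS)
    print(uri, etype, name)
```

A dedicated RDF library buys you a lot more (graph queries, serialization formats), but the basic shape of the data is approachable.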

Performance in Knowledge Domain ‘x’

Where ‘x’ is fashion or square dancing or rugby. OpenCalais is optimized for performance in the general world of business – that’s where we excel.

We have extended OpenCalais to take steps in other areas (such as sports, media, etc.) – but if you need deep semantic extraction capabilities related to protein binding – there are better places to look.

Posted in Calais | Leave a comment

OpenPublish: Deploy a high-performance (semantic) web site in hours – not months.

A week or so ago our partner – Phase2 Technology – announced the release of OpenPublish. The dust has settled a bit from DrupalCon, and I wanted to take a few minutes to talk about what OpenPublish is and why it is very, very important.

The quick background: Drupal is a hot Open Source content management and web site deployment platform. It probably has tens of thousands of users and thousands of internal and external deployments. Suffice it to say it’s the hot thing in Open Source CMS platforms right now.

Drupal lets you build a site fairly quickly. It won’t be pretty and it won’t have much functionality – but it can be up and running in a matter of minutes. Then you can spend the next few days, weeks or months giving it a nice look and feel, finding the extensions for the functionality you need and perhaps building some glue to hook it all together. Weeks or months later you’ll have the basics in place and can start to think about the advanced features you’d like to implement – in the next few weeks or months.

(Elapsed time – maybe 1-3 months)

Or, we can do it the OpenPublish way: download the installation setup (from here), run the setup, get a key from OpenCalais (here), and enter it into OpenPublish.

Done. Start writing or grabbing feeds. You’re finished.

(Elapsed time – maybe 1 hour)

But – here’s where things start to get very interesting. OpenPublish isn’t just a quick way to install Drupal. OpenPublish uses Calais semantic technology (look at that – seven paragraphs in and the first time we’ve used the word semantic) to provide features even the big guys don’t have. Here are a few examples:

  • Articles are automatically tagged with the people, places, companies, geographies and other elements inside them. You can do this automatically by setting relevance thresholds or do it interactively where Calais suggests and you approve.
  • You can automatically tag your archives. Thousands of articles – no problem. Millions? Give us a call and we’ll work something out to get it done in a day or two.
  • You can automatically create topic hubs on any tag (e.g. a Drupal vocabulary term), set of tags, or logical combination of tags. Want a topic hub on “Natural Disasters” in California? About five clicks and it’s done – and it will maintain itself forever.
  • “More like this” functionality is built right in. Your readers can see other related content on your site or – at your option – on other blogs or mainline news sources.
  • Map integration, RDF generation and exposure, lots of other cool stuff.

What we like is that the semantics aren’t the goal here – they’re simply the enabler for a high performance publishing platform.

If you’re a publisher and you want help customizing the installation you should contact our friends at Phase2 and they’d be happy to help. If you’re a smaller non-profit, an advocacy organization or generally someone who doesn’t have a lot of money or time – OpenPublish can literally get you up and running in hours.

The Calais Initiative is proud to sponsor the development of the Drupal modules underlying OpenPublish and proud to work with the Phase2 team – they’re a great group of people.

P.S. It’s all free.

P.P.S. Nancy Kho wrote a great overview here.

Posted in Calais, Uncategorized | Leave a comment

Metadata as a Service

Kas Thomas (of CMS Watch) wrote two great back-to-back posts on his blog.

In the first post, Kas discusses the power of “Metadata as a Service” – in short, what you can make happen if metadata generation is widely available to your content creation, management and consumption tools.

What’s great is that he doesn’t stop there. In his second post he goes on to construct an OpenOffice plugin that automatically meta-tags your content as you’re creating it. This has obvious benefits for content management and search across or outside the enterprise.
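The “tag while you create” idea generalizes to any editor with a save hook. Here’s a toy Python sketch of the shape of such a plugin; the `extract_tags` keyword matcher is a naive stand-in for a real metadata service, and the hook and document structures are invented for this illustration:

```python
# Toy illustration of tagging content as it is created. The entity list
# and matcher below are naive stand-ins for a real metadata service.
KNOWN_ENTITIES = {"IBM": "Company", "New York": "City", "Jane Doe": "Person"}

def extract_tags(text):
    """Stand-in metadata service: return (type, name) pairs found in text."""
    return [(etype, name) for name, etype in KNOWN_ENTITIES.items()
            if name in text]

def on_save(document):
    """Editor save hook: attach generated metadata to the document."""
    document["metadata"] = extract_tags(document["body"])
    return document

doc = on_save({"body": "IBM opened a new office in New York."})
print(doc["metadata"])  # [('Company', 'IBM'), ('City', 'New York')]
```

Swap the matcher for a call to a real extraction service and the same hook gives you enterprise content that tags itself.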

Now – take what Kas has done and extend it to the Linked Data cloud, as we’ve done with Calais 4.0. Beyond metadata we now have super-metadata. By using the Linked Data capabilities built into Calais, you could not only tag an article as being about, say, “IBM” – but also capture the facts that IBM is headquartered in New York, that New York is part of North America, that IBM has an SIC code of 8742, and more.

Here’s the Calais URI for IBM: Start exploring the DBpedia links at the bottom and I’m sure you’ll think of some interesting use cases.
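That chain of assertions is exactly a walk across linked records. Here’s a toy sketch, with an invented in-memory graph standing in for the Calais URI and the DBpedia records it points to:

```python
# A toy in-memory "linked data" graph. The facts mirror the example in
# the post, but the structure is invented purely for illustration.
GRAPH = {
    "IBM": {"headquarteredIn": "New York", "sicCode": "8742"},
    "New York": {"partOf": "North America"},
}

def follow(entity, *path):
    """Walk a chain of links from an entity, like hopping from a
    Calais URI out into DBpedia and beyond."""
    node = entity
    for link in path:
        node = GRAPH.get(node, {}).get(link)
        if node is None:
            return None
    return node

# Tag an article as being about IBM, then enrich with super-metadata:
print(follow("IBM", "headquarteredIn"))            # New York
print(follow("IBM", "headquarteredIn", "partOf"))  # North America
```

In the real Linked Data cloud each hop is an HTTP dereference of a URI rather than a dictionary lookup, but the traversal pattern is the same.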

Posted in Calais | Leave a comment

Life in the Linked Data Cloud: Calais Release 4

(Re-purposed from a post on the Calais blog.)

The Gist: Release 4 of Calais will be a big deal. In that release we’ll go beyond the ability to extract semantic data from your content. We will link that extracted semantic data to datasets from dozens of other information sources, from Wikipedia to Freebase to the CIA World Fact Book. In short – instead of being limited to the contents of the document you’re processing, you’ll be able to develop solutions that leverage a large and rapidly growing information asset: the Linked Data Cloud.

The goal of this post is just to give our community a heads-up to start thinking and planning.

During the course of 2008 we’ve had three significant releases of Calais, with additional point releases nearly every month along the way. We’ve added new knowledge domains, improved performance, delivered integration with a range of tools and developed new user-facing applications. It’s been a year of amazing growth in our developer community and in the capabilities of the Calais service.

While every previous release has accomplished something significant, Release 4 is going to introduce something that we think is game changing – and that’s life in the Linked Data cloud. It’s important enough that we want to give all the members of our community time to think about it, prepare for it and get your brains in gear on how you might use it.

Every release of Calais up to this point has focused on meeting the need to extract semantic information from text. Release 4 builds on this by creating the ability to harvest the Linked Data cloud using that semantic data.

For this all to make sense we need to introduce a few things. If you already know about de-referenceable URIs and the Linked Data cloud – skim ahead. If not – please take a moment to ingest the background you need.

When you send text to Calais it returns several things: entities, facts, events and categories. For purposes of today’s discussion we’re going to focus in on entities. Entities are just what they sound like – they are things. Some specific examples are people, companies, organizations, geographies, sports teams and music albums.

When Calais extracts an entity from your text it returns (at least) a few things. It tells you the name of the entity and it tells you what type of entity it is. Unlike other extraction services we don’t just return a list of things – Calais tells you it found a thing of type=Company and a value=IBM or type=Person and value=Jane Doe. But – there’s something else Calais returns that hasn’t meant very much up until now: it returns a Uniform Resource Identifier (URI) for that entity. There’s nothing magic about URIs – they are simply a unique identifier for every entity that Calais discovers. Here’s an example (it’s not pretty) of what the URI for the Company IBM looks like:

Well, that doesn’t look very useful does it? If you were to pull up that URI (when Release 4 is out) all you’d see is RDF with links to places called DBpedia and Freebase and Reuters. But keep those links in mind: they’re the key to a whole new world.

Linked Data is the name of a movement underway (not too surprisingly, initiated by Sir Tim Berners-Lee) that sets a standard and expected behavior for publishing and connecting data on the web. This isn’t about publishing web pages – it’s about turning those web pages into data that programs can work with. We’ll give you a quick example to make it real: Wikipedia is one of the largest collections of information across a broad range of topics in the world. It’s really great if I’m a person casually looking for information on a particular topic – but it’s not so great if I’m a computer program that wants to use that data. Why? Because it’s formatted and organized for people – not computers – to read.

But Wikipedia has a twin – in fact, a Linked Data twin – called DBpedia. DBpedia has the same structured information as Wikipedia – but translated into a machine-readable format called RDF and accessible via the Linked Data standards. And Wikipedia is not alone. A growing cloud of information sets – from DBpedia to the CIA World Fact Book to U.S. Census data to MusicBrainz and many others – is becoming available. What’s important is that this cloud is 1) growing, and 2) interoperable. There are “pointers” from entries in DBpedia to entries in MusicBrainz and back to entries in GeoNames – it’s another big Web – but this time it’s a Web of Data.

So – lots of words and arcane concepts. Let’s try to bring it all together into something that makes sense. We’ll put one sentence out there – and then we’ll give a few examples.

Beginning with Calais Release 4 you and the programs you develop will be able to go from many of the entities Calais extracts directly to the Linked Data Cloud.

A simple example:

I want to process today’s business news. For each article I want to extract all of the companies mentioned – but only if the article also mentions a merger or acquisition. I am only interested in companies whose headquarters (or those of their subsidiaries) are located in New York State. Do all of that and give me a widget for my news site titled “Merger Activity for NY Consulting Companies”. And oh, by the way, this isn’t a research project – I want you to do it real time for the 10,000 pieces of news I process every day.

How would you do that? Option 1 is to hire a bunch of researchers, give them a fast internet connection and teach them to type very, very fast. Option 2 is to write some code that looks like this:

For each Article
    Submit to Calais, get response
    If MergerAcquisition exists then
        For each Company
            Retrieve Calais Company URI, extract DBpedia link
            Send Linked Data inquiry to DBpedia, get response
            If CompanyIndustry contains "Consulting"
                If CompanyHeadquarters = "New York"
                    Put them on the list
            For each subsidiary
                Send Linked Data query to DBpedia, get result
                If CompanyHeadquarters = "New York"
                    Put them on the list
            (lots of endif's)
Print the list
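Rendered as runnable Python over stubbed data, that flow looks something like the sketch below. Every field name and record shape here is an invented stand-in – the real Calais and DBpedia interfaces speak RDF, not these dictionaries:

```python
# Stubbed "Calais" responses: one record per article. Field names are
# invented stand-ins; the real service returns RDF.
ARTICLES = [
    {"events": ["MergerAcquisition"],
     "companies": [{"name": "Acme Consulting", "dbpedia": "Acme"}]},
    {"events": [],
     "companies": [{"name": "Globex", "dbpedia": "Globex"}]},
]

# Stubbed "DBpedia" lookups, keyed by link.
DBPEDIA = {
    "Acme":    {"industry": "Consulting", "hq": "New York", "subsidiaries": ["AcmeSub"]},
    "AcmeSub": {"industry": "Consulting", "hq": "New York", "subsidiaries": []},
    "Globex":  {"industry": "Retail", "hq": "Ohio", "subsidiaries": []},
}

def ny_consulting_mergers(articles):
    """Companies (and subsidiaries) in merger news, consulting, HQ'd in NY."""
    hits = []
    for article in articles:
        if "MergerAcquisition" not in article["events"]:
            continue
        for company in article["companies"]:
            record = DBPEDIA[company["dbpedia"]]
            # Check the company itself plus each of its subsidiaries.
            for link in [company["dbpedia"]] + record["subsidiaries"]:
                data = DBPEDIA[link]
                if "Consulting" in data["industry"] and data["hq"] == "New York":
                    hits.append(link)
    return hits

print(ny_consulting_mergers(ARTICLES))  # ['Acme', 'AcmeSub']
```

In production the two stub dictionaries become live service calls, but the filtering logic stays exactly this small.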

That really is a pretty straightforward example. How about companies in the news with at least one subsidiary doing business in an area that the CIA Factbook considers dangerous? Or books released by authors who attended Harvard and live in Ohio? Or … we think you get the idea.

So. The summary. The combination of semantic data extraction (generic extraction, tags, keywords won’t do the trick) + de-referenceable URIs (entity identifiers you and your programs can retrieve) + the Linked Data Cloud = amazing stuff.

We’d like you to start thinking about it.

Posted in Uncategorized | Leave a comment

Developers! Developers! Developers!

One of the really fun parts of working on the Calais Initiative is our community of developers. They toil in quiet and then – surprise! – they release something really cool and interesting. So – I wanted to take just a moment to highlight two new Calais R3.1 applications that popped up this weekend.

iPlayerlist by Geography

iPlayerlist is an interesting application that takes shows available via the BBC iPlayer and lets you find them by topics, times and other attributes. Andy @ has just rolled out an enhancement that uses the new Calais geo-location capabilities to find shows based on the locations mentioned in their descriptions. Available here, it’s a great example of a simple, clean way to improve the user experience using semantic metadata extraction. Unfortunately, viewing many of the resulting videos won’t work unless you’re in the U.K. That isn’t iPlayerlist’s fault – it’s a limitation the BBC has put in place.

Calais Geo Location Tutorial and Demo App

Guilhem Vellut has put together a nice demonstration app that shows the Calais geo-location features in action. While I really like the application (you can see it here), it’s the blog post he wrote giving the details of exactly how he built it – including code samples – that’s really great. By investing the time to document what he did and how he got everything working together, he’s provided a great jumpstart for anyone else wanting to experiment with Calais geo-location. Thanks!

Posted in Calais | Leave a comment