As a photographer you have probably come across the term geotagging before now: a way of recording the location a photo was taken as metadata embedded within the image file. Geotagging can be done in camera (some cameras contain a GPS receiver), with a companion device (a geotagger) that tracks your location as you shoot and synchronises with your photos later, or manually with software (just as you embed titles, descriptions and keywords).
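
If you ever need to geotag manually or in bulk, the metadata involved is just the GPS block of a JPEG's EXIF data. Below is a minimal Python sketch using the piexif library, assuming a file called photo.jpg, that writes a set of coordinates into an existing image; dedicated geotagging software does essentially the same thing behind the scenes.

```python
import piexif

# GPS EXIF stores degrees, minutes and seconds as rational pairs (numerator, denominator)
gps_ifd = {
    piexif.GPSIFD.GPSLatitudeRef: "N",
    piexif.GPSIFD.GPSLatitude: ((41, 1), (52, 1), (55, 1)),   # 41 deg 52' 55"
    piexif.GPSIFD.GPSLongitudeRef: "W",
    piexif.GPSIFD.GPSLongitude: ((87, 1), (37, 1), (40, 1)),  # 87 deg 37' 40"
}

# Load the existing EXIF, attach the GPS block and write it back into the file
exif_dict = piexif.load("photo.jpg")
exif_dict["GPS"] = gps_ifd
piexif.insert(piexif.dump(exif_dict), "photo.jpg")
```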

From a stock photography point of view, embedding the location coordinates in this way seems to have had little uptake, even though the technology has been around for some time. There are several reasons for this:

  • Stock images are often location independent, i.e. they don't show a recognisable location and indeed are not meant to.
  • Only Dreamstime seem to support embedded geotag data, and few stock photographers choose to embed it (a chicken-and-egg situation).
  • Poor Photoshop support: if the industry-standard tool has no 'find this on a map' feature, how important can it be?

 

Consumer Market Uptake

There has been plenty of interest outside the stock photo industry, or at least plenty of use of geotagged information, even if people don't know what it is called. Online services like Panoramio and Flickr have allowed users to geographically locate their photos for quite some time, and Google Maps lets millions of visitors view images of the location they are looking at on a map or satellite view.

Amateur photographers have adopted this to a minor degree, mostly as a way of making their work more visible, publishing on Google Earth and the like. The main problem at the moment is that it all involves extra work unless your camera embeds this information for you automatically.

 

So is Geotagging important for Stock Photography?

At the moment, perhaps not. But I think in some respects it will be, and it will be useful for far more images than you might imagine, well beyond the obvious examples of travel, landscape and editorial stock photography. Imagine being a news editor and being able to search every image on every photo-related website taken in the past 24 hours within one block of a pinpointed location, say downtown Manhattan. Imagine searching your own computer for images you took on a trip several years ago just by entering the location or clicking a map. At the moment you simply can't do a search like that: the information is frequently not there, and the search engines provide few tools to access it.
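
There is nothing exotic about the search itself once the coordinates exist. As a rough sketch (a hypothetical in-memory catalogue of geotagged images in decimal degrees, not any real search engine's index), the 'within one block of this pin' filter is just a great-circle distance test:

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points using the haversine formula."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

# Hypothetical catalogue of geotagged images (file names and coordinates are invented)
images = [
    {"file": "img_001.jpg", "lat": 40.7075, "lon": -74.0113},
    {"file": "img_002.jpg", "lat": 41.8819, "lon": -87.6278},
]

# Keep everything within roughly one city block (~0.1 km) of the pinpointed location
pin_lat, pin_lon = 40.7074, -74.0113
nearby = [i for i in images if distance_km(pin_lat, pin_lon, i["lat"], i["lon"]) < 0.1]
print(nearby)
```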

Sure enough, some photographers will never need to geotag their work; it's just not relevant for studio work, is it? Or perhaps it is. If I wanted a photo of a well-known consumer product, and only of the model that was released to the US market, could I filter on geolocation? It wouldn't be 100% reliable, but it is the kind of thing end users might well try.

The other important thing about geotagging is that recording the location a photo was taken as 41°52′55″N 87°37′40″W (which is downtown Chicago, IL) is semantically different from adding the possibly irrelevant keyword "Chicago" to our image keywords or location field. There is no possibility of ambiguity between these coordinates and Chicago the place in South Africa, or worse, Chicago the musical. Imagine trying to search for images of the band Chicago taken in Chicago. Don't laugh: I'm sure at some point some poor picture editor has been landed with that job, and if they can't find it they can't buy it.
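
For anyone wondering how those degrees, minutes and seconds relate to the decimal values most software stores, the conversion is simple arithmetic. A small illustrative Python helper:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert degrees/minutes/seconds plus a hemisphere reference to signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# 41°52′55″N 87°37′40″W  ->  roughly (41.8819, -87.6278)
print(dms_to_decimal(41, 52, 55, "N"), dms_to_decimal(87, 37, 40, "W"))
```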

Privacy has been a concern for some people. At present there are limited ways around this, such as reducing the recorded accuracy so that images are marked as being taken 'near this location'. This allows consumers to narrow a search down to town and suburb but not street and house.
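
One simple way such a 'near this location' setting could work (an illustrative sketch, not how any particular site actually implements it) is to round the coordinates before publishing them; two decimal places is roughly a kilometre of precision:

```python
def blur_location(lat, lon, places=2):
    """Round coordinates so an image reads as taken 'near' a town, not at a specific address."""
    return round(lat, places), round(lon, places)

print(blur_location(41.881944, -87.627778))  # (41.88, -87.63)
```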

 

Here comes the Semantic Web

Okay, so we have just got used to Web 2.0: sites that aggregate news feeds and blog posts, widgets that display our recent stock image uploads on a blog that other people read on their mobile phones. In just a few short years people have become accustomed to being able to comment and vote on everything. Net users can repost and 'mash' live data that other people produce and share. We can feed our Flickr images into a Flash application on our Facebook page so friends and family can see our images; even better, we can subscribe to dozens of microstock blogs and discussion pages and read them in a feed reader whenever and wherever we want. Fantastic, isn't it? A few years ago such statements would have sounded like something "for geeks", and to a large proportion of internet users some of this technology still does, yet plenty of people are using Web 2.0 even if they know nothing about the underlying technology.

Web 3.0 is about semantic data. It's not about feeding data out and digesting it somewhere else; it's about tagging everything we create in a way that removes ambiguity through context, then creating the tools to let people do what they like with that data in a natural way. As a photographer you are in some respects already doing this: you tag a title and description, perhaps a location, and you probably also mark your images as copyright or perhaps Creative Commons, but at the moment the technologies for doing so are rather limited. If you have ever cut and pasted the code from creativecommons.org then you have already embedded some semantic data about the license you have given to your image; that code does not just say Creative Commons with a pretty icon, it also invisibly points to a license document that contains all the information.

A format called RDF comes to the rescue here, just one of a suite of semantic web technologies that will shape the future internet. There is no need for you as a photographer to go away and read the spec now, but the icon below, just like the orange RSS icon, is probably going to become a lot more familiar over the next few years...

RDF icon
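
To make RDF a little less abstract, here is a minimal sketch using the Python rdflib library. It describes a single photo with a title, an author, a Creative Commons license and the geotag from earlier, all as unambiguous machine-readable statements; the photo and author URIs are made up for the example.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC

GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")
CC = Namespace("http://creativecommons.org/ns#")

photo = URIRef("http://example.com/photos/1234")        # hypothetical photo URI
author = URIRef("http://example.com/people/jane-doe")   # hypothetical author URI

g = Graph()
g.add((photo, DC.title, Literal("Chicago skyline at dusk")))
g.add((photo, DC.creator, author))
g.add((photo, CC.license, URIRef("http://creativecommons.org/licenses/by/3.0/")))
g.add((photo, GEO.lat, Literal(41.8819)))
g.add((photo, GEO.long, Literal(-87.6278)))

# Print the graph in Turtle, one common RDF serialisation
print(g.serialize(format="turtle"))
```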

Web 3.0 is still a few years off yet. It's not going to descend on us overnight; it will arrive gradually, as websites add nice features like faceted searches that can optionally mix in and filter by external data on request. Just as we now expect to be able to comment on, or get a feed of, anything we find online, we will soon come to expect that the things we find, such as search results, can be mixed with other data we choose. All this without us needing to worry about how it works, because the data will know how to organise itself.

Semantically tagged content must be created, and like the chicken-and-egg situation mentioned above, adoption will be slowed by the fact that at the moment there is no immediate benefit to doing such tagging. Compounding this, software developers will be slow to add such features as standard until a critical mass of users demands them. It will take a while for the technology to standardise and for end-user-friendly ways of accessing it to appear. Take a look at MIT's SIMILE project and their Exhibit widget if you want an example that visually explains what is going on here. People creating semantically tagged content now, or at least thinking about it in their planning, will be ahead of the game.

 

Semantic Web Applications

How will people put this to use? The answer is "in more ways than you can possibly imagine". It will be way better than Google, or it will be the new Google. Yahoo SearchMonkey is already beginning to implement some very basic RDF-tagged functionality, including images relating to a page, location or individual; developers can then take that data and create applications from it to add extra functionality. Here is a (possibly wild) idea:

I'm an image buyer writing a story on gold mining in Australia, and I need a photo to illustrate it. Today I would need to go to several stock photography agencies and enter my search terms, and in some cases the terms are ambiguous: "mine" (a weapon, or an underground space?) and "gold" (a colour, or a precious metal?). If I had an internet full of images suitably tagged with RDF data, I would be able to find every image that was licensed for use as stock or free to use, taken in Australia, larger than the number of pixels I need for my cover story, and matching the keywords "gold mine". I could plot that information against the date the photo was taken, the time of day (e.g. night time) or the geographic location. All of this would give me finer control to find just the photo I wanted instead of using clumsy hit-and-miss keywords. Doing the same search today would mean stringing together a query like "mine, Australia, gold, historical, daytime" and praying. Oh, and I need that image urgently, right now! Skype is currently testing online-presence RDF data; mix that in and I can show only the images whose authors are online right now, so I can pick up the phone and call them, and it will even give me their number so I don't have to go and search for it.
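
To show what that buyer's search could look like once such data exists, here is a purely illustrative sketch: a SPARQL query run with rdflib over a hypothetical aggregated metadata feed. The feed URL and the ex: pixel-width property are invented for the example; the dc:, cc: and geo: vocabularies are real.

```python
from rdflib import Graph

# Hypothetical aggregated graph of image metadata gathered from many sites
g = Graph()
g.parse("http://example.com/aggregated-image-metadata.ttl", format="turtle")

query = """
PREFIX dc:  <http://purl.org/dc/elements/1.1/>
PREFIX cc:  <http://creativecommons.org/ns#>
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX ex:  <http://example.com/schema#>

SELECT ?photo WHERE {
  ?photo dc:subject "gold mine" ;
         cc:license ?license ;
         geo:lat ?lat ;
         geo:long ?long ;
         ex:pixelWidth ?width .
  FILTER (?width >= 3000)
  # A rough bounding box around Australia
  FILTER (?lat < -10 && ?lat > -44 && ?long > 112 && ?long < 154)
}
"""

for row in g.query(query):
    print(row.photo)
```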

 

What about keyword spam? Won't everything be unregulated? Yes, there will always be sites full of false or fake information trying to drive traffic to some sort of advertising. One of the great advantages of Web 3.0 is that visibility will likely be driven not just by the tagging content creators perform, but also by the quality of that tagging and the level of interest it receives. Sources like Wikipedia have already become trusted sources of all kinds of data. In the future I can imagine services providing organised data that directs buyers to images of specific genres and styles, popular photographers and so on. End users will easily be able to filter 'spam' by removing sources from their results, in the same way that spam blockers currently watch which emails users mark as spam and delete them so that other users don't have to see messages from the same spammer.

So what's the value in being a stock agency in the Web 3.0 future? Plenty. You could provide buyers with a feed of all your images; buyers could mash that data up and filter for images that are also listed at a stock agency, thereby guaranteeing some level of quality. Exclusive images will still be of value, as will giving individual photographers a way of offloading payment processing and customer support to someone else who can do it better.

 

Summary

Even if Web 3.0 is years off, it seems likely that sooner or later 'the internet' will start delving deeper into our images and their existing metadata. Google Images currently reads neither EXIF data nor IPTC descriptions; Flickr can read EXIF but does not allow searching by that information beyond finding images taken with the same type of camera. It's obvious that at some point Google, if not someone else using their own sources, will add "map these results by geographic location", "display only images tagged with IPTC Author: Jane Doe" and so on. Right now is a fine time to think about what you tag each of your images with. Do you bother to set the clock on your camera to the correct time and date? Do you leave the author and contact fields blank because none of the microstock sites use them? If you can geotag then do it; if you can't, evaluate whether it adds any value to the images you take and consider that when buying future hardware and software.
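
A quick way to see where you stand today is to check what your own files already carry. A small sketch using the Python exifread library, assuming an image called photo.jpg:

```python
import exifread

# Read the EXIF tags from one of your own images
with open("photo.jpg", "rb") as fh:
    tags = exifread.process_file(fh, details=False)

# Report a few fields worth keeping filled in
for field in ("Image Artist", "Image Copyright", "EXIF DateTimeOriginal", "GPS GPSLatitude"):
    print(f"{field}: {tags.get(field, '-- not set --')}")
```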

A few years down the line we won't be searching for "Author: Jane Doe"; we will have a URI that describes Jane, and all her work will be tagged with it (if she wants to share that information). Searching for an individual's pictures on a certain subject will be a lot easier and much less ambiguous, not open to confusion with anyone else who shares the same name. Jane will even be able to separate her personal photos from her work by using a different pseudonymous URI.

As data becomes better organised, it will become easier to find stock images located anywhere online. Quality photography has always been vital, and quality will become even more important than the agency or website that represents the images or the marketing done to promote them.

 

Related Posts

Keyword setting software (including geotagging)

Microstock editorial images

