Annotation and Elaboration

Missing from the current digital landscape are well-developed strategies for annotating already-authored information. Highlighting, pop-ups, stickies, and website comment sections make up the majority of formats for adding information to content previously generated by another person or organization. Many of these strategies rely on simplistic physical metaphors (notebook pages, post-it notes) and typed text, which limit what form and spatial location can contribute to the meaning of content. And in many cases, annotations are compiled chronologically rather than by topic, author, or relevance to a particular search or use of information.

How can a user explore the intersection of image and text (memes) on websites such as BuzzFeed, and form relationships and associations in a larger context, through the use of a browser plug-in?

Figure: step-by-step walkthrough of the proposed plug-in concept.

  • Original content.
  • The plug-in applies a mapping structure; users can explore associations based on memes.
  • Users can dive deeper into associations.
  • Original content can be added, further contributing to the meaning of the meme.
  • Users can discover new memes or text-image intersections.
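A minimal sketch of how such a plug-in might represent its mapping structure is given below, assuming a simple in-memory graph of content nodes and association edges. All names and types here are hypothetical illustrations, not part of the proposed design.

```typescript
// Hypothetical data model for the plug-in's association map.
// A node is a piece of captured content: an image, a text fragment,
// or a meme (image + text) found on the page being annotated.
type ContentKind = "image" | "text" | "meme";

interface ContentNode {
  id: string;
  kind: ContentKind;
  sourceUrl: string;   // page the content was captured from
  imageUrl?: string;   // present for "image" and "meme" nodes
  text?: string;       // present for "text" and "meme" nodes
}

// An association links two nodes and records why they are related,
// so the map can be filtered by topic or relevance rather than time.
interface Association {
  from: string;        // ContentNode id
  to: string;          // ContentNode id
  label: string;       // e.g. a shared topic, phrase, or visual motif
}

class AssociationMap {
  private nodes = new Map<string, ContentNode>();
  private edges: Association[] = [];

  addNode(node: ContentNode): void {
    this.nodes.set(node.id, node);
  }

  // Original content can be added, contributing further meaning to a meme.
  associate(from: string, to: string, label: string): void {
    this.edges.push({ from, to, label });
  }

  // "Dive deeper into associations": walk outward from a node.
  neighbors(id: string): ContentNode[] {
    return this.edges
      .filter((e) => e.from === id || e.to === id)
      .map((e) => (e.from === id ? e.to : e.from))
      .map((otherId) => this.nodes.get(otherId))
      .filter((n): n is ContentNode => n !== undefined);
  }
}
```

In an actual browser extension, a content script might detect image+text pairings on the page while a background script persists the map, so associations could be explored across pages rather than compiled chronologically.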


Content is generated at an extraordinary pace on the Internet, and image, text, and image+text combinations generate different meanings for different users based on their own cultural, social, political, historical, and economic experiences.

Inferring meaning from images is complicated. According to Roland Barthes in "Rhetoric of the Image" (collected in Image, Music, Text), when reading images we can establish a system for understanding signs and signifiers in image and text:

  • The Denoted: the literal image
  • The Connoted: the symbolic image
  • The Linguistic: words in, near, or around the image
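As a rough illustration, an annotation tool could store all three readings alongside a captured image. The structure below is a hypothetical sketch of Barthes' layers, not something specified in the text, and the example values are invented.

```typescript
// Hypothetical record of Barthes' three message layers for one image.
interface ImageReading {
  imageUrl: string;
  denoted: string;     // the literal image: what is plainly depicted
  connoted: string;    // the symbolic image: cultural meanings a viewer infers
  linguistic: string;  // words in, near, or around the image (caption, overlay)
}

// Example reading of a hypothetical meme image.
const reading: ImageReading = {
  imageUrl: "https://example.com/meme.jpg",
  denoted: "A dog sitting calmly in a room that is on fire",
  connoted: "Calm acceptance of an unfolding disaster",
  linguistic: "This is fine.",
};
```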