Tuesday, May 22, 2018

IBM on The Internet of Things

“Over the past century but accelerating over the past couple of decades, we have seen the emergence of a kind of global data field. The planet itself – natural systems, human systems, physical objects – have always generated an enormous amount of data, but we didn’t used to be able to hear it, to see it, to capture it. Now we can because all of this stuff is now instrumented. And it’s all interconnected, so now we can actually have access to it. So, in effect, the planet has grown a central nervous system.”

I really like the “global data field” idea and welcome the tone and language of the video. While most discussion of pervasive computing and the internet of things is obscured by jargon, it is great to see people come up with clear explanations and use cases that have an impact on people’s lives.

We will see in the coming months how this kind of language starts to be picked up by designers and service design consultancies, and we can expect more scenarios to emerge as a result.


Dropped food algorithm. Should you eat it?


I find this sort of visualisation of commonsense processes really interesting. Apart from the obvious tongue-in-cheek nature and tone of this diagram, there is something captivating about the dynamic nature of a diagram illustrating what would in real life be a split-second decision. It illustrates how the process can be broken down into discrete steps and then programmed.

“The 30-Second Rule, A Decision Tree” by Audrey Fukuman and Andy Wright. via flowingdata
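To make the point about breaking the decision into discrete, programmable steps, here is a toy sketch of my own. The questions and their order are invented for illustration, not taken from the actual diagram:

```python
# A toy decision tree, invented for illustration, showing how a split-second
# commonsense decision can be encoded as discrete, programmable steps.

def should_you_eat_it(seconds_on_floor, floor_is_sticky, anyone_saw_you, is_bacon):
    """Return True if the dropped food passes a tongue-in-cheek decision tree."""
    if is_bacon:                 # some foods override every other rule
        return True
    if seconds_on_floor > 30:    # the titular 30-second rule
        return False
    if floor_is_sticky:
        return False
    if anyone_saw_you:           # the social cost outweighs the snack
        return False
    return True
```

Each branch is one of the split-second judgements we make without noticing; laid out as code, the whole process becomes explicit.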


The ideas project, Clay Shirky

Clay Shirky talks about emotion and media, in particular how we now have more “emotional” responses to certain types of media. According to Shirky this is due to the rapid spread of messages via social networks: millions of tweets about swine flu in a short period of time might create an emotional response from a large group of people. Another reason social media provokes more emotional responses is that many of the messages we get through social networks come from friends, to whom we already have some kind of emotional attachment. One of the dangers of the latter is that we might take things at face value because of their provenance, and that creates fertile ground for an un-critical mass.

Shirky suggests an interesting idea: we might get a “rankisation” of everything in the Google sense, and as we begin to analyse information from these networks we should take into account the context in which it occurs, for instance the speed of spread through the network.
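To make the spread-speed idea concrete, here is a hypothetical sketch of my own (not Shirky’s): rank topics not by raw message volume but by how fast the mentions arrived.

```python
# Hypothetical sketch: rank topics by the velocity of their spread through
# a network, not just by the number of mentions.

def spread_velocity(timestamps):
    """Mentions per hour over the window in which a topic was mentioned."""
    if len(timestamps) < 2:
        return float(len(timestamps))
    span_hours = (max(timestamps) - min(timestamps)) / 3600.0
    return len(timestamps) / max(span_hours, 1e-6)

def rank_topics(mentions):
    """mentions maps topic -> list of unix timestamps; fastest spread first."""
    return sorted(mentions, key=lambda t: spread_velocity(mentions[t]), reverse=True)

# Four swine-flu mentions in half an hour outrank two weather mentions
# spread over ten hours, even though both topics are "in the network".
ranking = rank_topics({"swine flu": [0, 600, 1200, 1800], "weather": [0, 36000]})
```

The point is that the same count of messages means something very different depending on the window it arrived in; the context of spread becomes part of the ranking signal.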


MIT sociable media, Personas

Personas is a project by the Sociable Media Group from the MIT Media Lab currently on display at their Metropath(ologies) exhibition. In their words:

It uses sophisticated natural language processing and the Internet to create a data portrait of one’s aggregated online identity. In short, Personas shows you how the Internet sees you.

Personas prompts you to enter your name and crawls the web looking for material about you in order to build a visual profile based on semantic analysis of the data found. The analysis is performed against a corpus of data to come up with the categories that best describe you, and these result in the different strands that compose your unique visual identity.
The project is meant as a critique of the trust we place in automated processes to make sense of data through data-mining. In an age where reliance on data seems to be everywhere, computers still come up with as many incredible insights as obvious mistakes.
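The project’s actual pipeline is far more sophisticated, but a crude sketch helps show why such category-scoring both works and fails. Everything here – the categories, the keywords, the scoring rule – is invented for illustration:

```python
# Crude, invented sketch of a category-scoring step: count how often words
# associated with each category appear in the crawled text, then normalise.
CATEGORY_KEYWORDS = {          # toy corpus, not the MIT project's real one
    "software":     {"code", "software", "windows", "computer"},
    "philanthropy": {"foundation", "charity", "donation", "grant"},
    "politics":     {"minister", "parliament", "policy", "election"},
}

def data_portrait(text):
    """Return normalised category weights for a blob of crawled text."""
    words = text.lower().split()
    scores = {cat: sum(w in kws for w in words)
              for cat, kws in CATEGORY_KEYWORDS.items()}
    total = sum(scores.values()) or 1
    return {cat: s / total for cat, s in scores.items()}
```

A scheme like this is only as good as its corpus and its keyword matches, which is exactly where the “obvious mistakes” creep in: a namesake, a joke, or an ambiguous word skews the whole portrait.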

Cases in point are the three searches I tried: Bill Gates, Rupert Murdoch and Gordon Brown.
Gates seems to be big in the online world, but not much seems to be happening on the business, philanthropy or software side of things.
Murdoch’s presence in the media world seems to be much smaller than what he has led us all to believe.
Finally, I was quite happy to see our prime minister quite active on the political front, though I had hoped for a bit more presence on the economic side of things.
Great project, beautifully executed. via TechCrunch


Everything is music

RjDj is an iPhone application that introduces the notion of “scenes” (akin to samples of real-life audio) and mixes them with input taken from the microphone or any of the phone’s other sensors to produce an “auditory experience”. The results are amazing.

Chris, one of the developers of the project, demonstrates the product.

I find it incredible how elegantly RjDj achieves what many people try to do in other spheres – take real-time input of data and transform it into a beautiful output.

Edit. I’m not sure beautiful is the right way to describe the “auditory experience” delivered by RjDj. I was thinking in parallel about how much of the current work on information visualisation deals with “making the invisible visible” by finding beauty in hidden patterns of data. The word I was looking for, however, is compelling. There is something about the swiftness of the interaction, and about making sampling culture so immediate, that makes it hugely compelling. This makes it vastly relevant when thinking about the possibilities of future real-time information systems.


Humanity-centered design

Aaron Sklar from IDEO talks about his panel at the upcoming Social Capital Markets (SOCAP09) conference in San Francisco. The workshop is organised in partnership with GOOD and deals with the shift in design from human-centered to humanity-centered approaches, where design is focused not only on individuals but on larger social groups. Humanity-centered design incorporates evaluation and feedback as integral components of a process in which validation of outcomes is essential. It is also one of the core ideas behind the surge of design-thinking discussions in business contexts. I look forward to the reports after the conference.


Significant objects


A talented, creative writer invents a story about an object. Invested with new significance by this fiction, the object should — according to our hypothesis — acquire not merely subjective but objective value. How to test our theory? Via eBay!

The auction for this Significant Object, with story by Claire Zulkey, has ended. Original price: 50 cents. Final price: $5.50. Get yours!


Bokode: Imperceptible Visual Tags

Current optical tags, such as barcodes, must be read within a short range and the codes occupy valuable physical space on products. We present a new low-cost optical design so that the tags can be shrunk to 3mm visible diameter, and unmodified ordinary cameras several meters away can be set up to decode the identity plus the relative distance and angle.

The design exploits the bokeh effect, a photographic term referring to the æsthetic quality of point-of-light sources in an out-of-focus area of an image produced by a camera lens using a shallow depth of field.
There are two types of Bokode described in the paper: active tags, which use an LED to light the Bokode pattern from behind, and passive tags, which replace the LED with a retro-reflector and use the camera flash as the illumination source.

The system is targeted at consumer applications, collaborative interfaces, and augmented reality mapping. You can read a full copy of the paper describing the technology behind it.
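A back-of-the-envelope calculation of my own (not from the paper) gives a feel for why the bokeh effect helps here: with the camera focused at infinity, a point source at distance d casts a blur disc of diameter roughly A·f/d on the sensor, where A is the aperture diameter and f the focal length. The tiny Bokode pattern rides inside that magnified disc.

```python
# Thin-lens back-of-the-envelope: a camera focused at infinity turns a point
# source at distance d into a blur disc of diameter A * f / d on the sensor
# (A = aperture diameter, f = focal length, all in the same units).

def blur_disc_diameter_mm(aperture_mm, focal_length_mm, distance_mm):
    """Diameter of the out-of-focus disc a point source casts on the sensor."""
    return aperture_mm * focal_length_mm / distance_mm

# A 50 mm f/1.8 lens (aperture ~ 50 / 1.8 mm) looking at a tag 2 m away:
disc = blur_disc_diameter_mm(50 / 1.8, 50, 2000)
# ~0.7 mm on the sensor - hundreds of pixels for a 3 mm tag to encode data in.
```

This is why a 3 mm tag can still be decoded several metres away: the deliberately defocused camera magnifies the pattern rather than shrinking it.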


Click! A Crowd-Curated Exhibition at the Brooklyn Museum

Taking its inspiration from the critically acclaimed book The Wisdom of Crowds, in which New Yorker business and financial columnist James Surowiecki asserts that a diverse crowd is often wiser at making decisions than expert individuals, Click! explores whether Surowiecki’s premise can be applied to the visual arts—is a diverse crowd just as “wise” at evaluating art as the trained experts?

Click! is an exhibition in three consecutive parts. It begins with an open call—artists are asked to electronically submit a work of photography that responds to the exhibition’s theme, “Changing Faces of Brooklyn,” along with an artist statement.
After the conclusion of the open call, an online forum opens for audience evaluation of all submissions; as in other juried exhibitions, all works will be anonymous. As part of the evaluation, each visitor answers a series of questions about his/her knowledge of art and perceived expertise.

Click! culminates in an exhibition at the Museum, where the artworks are installed according to their relative ranking from the juried process. Visitors will also be able to see how different groups within the crowd evaluated the same works of art. The results will be analyzed and discussed by experts in the fields of art, online communities, and crowd theory.

The exhibition is organized by Shelley Bernstein, Manager of Information Systems, Brooklyn Museum.


I came across this today – hardly news, as it is an exhibition from 2008. I love the idea of a crowdsourced exhibition; even today this is still exploratory territory. Perhaps what I like the most is how the idea has been turned on its head: the public jury was also asked to evaluate, or at least state, their own level of expertise in the subject. This self-assessment had an impact on how each vote was weighted in the ranking process.
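A hypothetical sketch of my own makes the weighting idea concrete (the museum’s actual weighting scheme isn’t described here):

```python
# Hypothetical sketch of expertise-weighted ranking: each rating counts in
# proportion to the voter's self-assessed expertise (1..5). Invented for
# illustration; not the Brooklyn Museum's actual scheme.

def weighted_rank(ratings):
    """ratings: list of (score, self_assessed_expertise) tuples."""
    total_weight = sum(w for _, w in ratings)
    if total_weight == 0:
        return 0.0
    return sum(score * w for score, w in ratings) / total_weight

artworks = {
    "photo_a": [(8, 5), (4, 1)],   # self-declared experts liked it, a novice didn't
    "photo_b": [(6, 3), (6, 3)],
}
ranking = sorted(artworks, key=lambda a: weighted_rank(artworks[a]), reverse=True)
```

Under a scheme like this, the same raw scores can produce different rankings depending on who cast them – which is exactly the question the exhibition set out to probe.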

The exhibition website documents the show well, providing tools to explore the process and its results.


London Nearest Tube and New York Nearest Subway

London Nearest Tube and New York Nearest Subway, developed by acrossair, are augmented reality apps that overlay transport information on top of a real-world view. The applications use the new iPhone’s internal compass to tell you which stations lie in the direction you are pointing the phone.
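The core calculation such an app needs is simple enough to sketch (this is my own illustration, not acrossair’s code): the compass bearing from the user’s location to each station, which can then be compared with the phone’s heading to place the overlay.

```python
# Sketch of the core calculation behind a compass-based AR overlay: the
# initial great-circle bearing from the user's position to a station.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees (0 = north)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

# e.g. from Trafalgar Square towards King's Cross (roughly north-north-east):
b = bearing_deg(51.508, -0.128, 51.530, -0.123)
```

If the bearing to a station falls within the camera’s field of view around the compass heading, the app draws that station on screen; everything else is presentation.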

see. read. write. do.

A research blog about interaction, design research, urban informatics, ambient computing, visualisation, emerging technologies and their impact on the built environment.

About me

This is a blog by Gonzalo Garcia-Perate, a PhD researcher at The Bartlett, looking at adaptive ambient information in urban spaces.

Get in touch