prostheticknowledge:

Shadow QR Code 

Promotional physical installation casts QR Code at specific time of day to encourage business during downtime. From Springwise:

Periodic lulls in business are a fact of life for most retailers, and we’ve already seen solutions including daily deals that are valid only during those quiet times. Recently, however, we came across a concept that takes such efforts even further. Specifically, Korean Emart recently placed 3D QR code sculptures throughout the city of Seoul that could only be scanned between noon and 1 pm each day — consumers who succeeded were rewarded with discounts at the store during those quiet shopping hours.

Dubbed “Sunny Sale,” Emart’s effort involved setting up a series of what it calls “shadow” QR codes that depend on peak sunlight for proper viewing and were scannable only between 12 and 1 pm each day. Successfully scanning a code took consumers to a dedicated home page with special offers including a coupon worth USD 12. Purchases could then be made via smartphone for delivery direct to the consumer’s door.
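
As a rough illustration of the geometry at play (and not Emart’s actual design files), the Python sketch below builds a QR code module matrix with the third-party qrcode library and estimates how tall each pillar of a relief sculpture would need to be for its shadow to cover one module at an assumed midday solar elevation. The URL, module size, and elevation angle are illustrative assumptions.

```python
# Hedged sketch: estimating pillar heights for a shadow-cast QR code.
# Uses the third-party "qrcode" library; the landing URL, module size,
# and solar elevation are illustrative, not Emart's actual figures.
import math
import qrcode

URL = "http://example.com/sunny-sale"   # hypothetical landing page
MODULE_MM = 20.0                        # side of one QR module on the ground
NOON_ELEVATION_DEG = 60.0               # assumed midday solar elevation

# Build the boolean module matrix for the code.
qr = qrcode.QRCode(border=1)
qr.add_data(URL)
qr.make(fit=True)
matrix = qr.get_matrix()                # list of lists of booleans

# A vertical pillar of height h casts a shadow of length h / tan(elevation),
# so to darken one module the pillar must be at least MODULE_MM * tan(elevation) tall.
height_mm = MODULE_MM * math.tan(math.radians(NOON_ELEVATION_DEG))
print(f"QR size: {len(matrix)}x{len(matrix)} modules")
print(f"Pillar height needed per dark module: {height_mm:.1f} mm")
```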

More info (and cheesy video) can be found at Springwise here

thisistheverge:

Russian satellite’s 121-megapixel image of Earth is most detailed yet

There’s been a long history of NASA-provided “Blue Marble” images of Earth, but now we’re getting a different perspective thanks to photos taken by the Elektro-L No.1 Russian weather satellite. Unlike NASA’s images, this satellite produces 121-megapixel images that capture the Earth in one shot instead of a collection of pictures from multiple flybys stitched together. The result is the highest-resolution image of Earth yet.
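
For a rough sense of scale, the back-of-the-envelope arithmetic below assumes a square 121-megapixel frame and that Earth’s roughly 12,742 km diameter spans most of its width; both assumptions are mine, not figures from the satellite’s published specification.

```python
# Back-of-the-envelope resolution estimate (illustrative assumptions only).
import math

megapixels = 121e6
side_px = math.sqrt(megapixels)          # ~11,000 px per side if the frame is square
earth_diameter_km = 12_742

# If the Earth disc spans roughly the full frame width:
km_per_pixel = earth_diameter_km / side_px
print(f"Frame: ~{side_px:.0f} px per side")
print(f"Ground sampling at the sub-satellite point: ~{km_per_pixel:.2f} km/pixel")
```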

prostheticknowledge:

Logitech FotoMan Digital Camera 

The first commercial digital camera, released in 1990. From The National Media Museum:

Logitech FotoMan digital camera, made by Logitech in Switzerland, 1990.

The FotoMan was the first digital camera to go on sale. Fuji had created a digital camera as early as 1988, but it was never commercially available. The FotoMan took monochrome pictures; 32 images could be stored in the camera, and images could be uploaded to a PC using the cable supplied with the camera. The price of the camera on its release in 1990 was £499.

[Link]

prostheticknowledge:

AIKON Project

Sophisticated drawing machine features robotic drawing arm and camera sensor. It is called ‘Paul’ …

[The first minute demonstrates what happens initially; the real action starts afterwards.]

I’ve covered the project before, but it has since made much progress. From the project website:

Drawing is the human activity we investigate in the AIKON project. It has been practiced in every civilisation for at least the last 30,000 years. The project will be using computational and robotic technologies to explore the drawing activity. In particular the research focuses on face sketching. Why does a non-draughtsman find it so difficult to draw what they perceive so clearly, while an artist is able to do so, sometimes with just a few lines, in a few seconds? Furthermore, how can an artist draw with an immediately recognisable style/manner?

The main objective of our investigation is to implement a computational system capable of simulating the various important processes involved in face sketching. The ensemble of processes to be simulated includes the visual perception of the subject and the sketch, the drawing gestures, the cognitive activity (reasoning, the influence of years of training, etc.), and the inter-process information flows. It is evident that, due to knowledge and technological limitations, the implementation of each process will remain coarse and approximate. The system implemented is expected to draw in its own style.
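
Paul’s own perception and gesture model is far richer than anything that fits here, but the general shape of such a pipeline (camera input, an edge/contour pass over the face, then pen strokes for a robotic arm) can be sketched crudely with OpenCV. The file name and thresholds below are assumptions, and this is a stand-in, not the AIKON code.

```python
# Minimal sketch of a camera-to-strokes pipeline, loosely inspired by
# face-sketching robots such as AIKON's "Paul". Not the project's code.
import cv2

image = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input photo
image = cv2.GaussianBlur(image, (5, 5), 0)              # soften noise before edge detection

# Extract salient contours, roughly analogous to the "perception" step.
edges = cv2.Canny(image, threshold1=50, threshold2=150)
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# Convert each contour into a polyline "stroke" a drawing arm could follow.
strokes = []
for contour in contours:
    polyline = cv2.approxPolyDP(contour, epsilon=2.0, closed=False)
    if len(polyline) > 1:
        strokes.append([tuple(pt[0]) for pt in polyline])

print(f"{len(strokes)} strokes extracted for the plotter")
```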

More about the project can be found here

prostheticknowledge:

Casio introduces 2D to 3D photo conversion service

Basically, it turns photographs into 3D photo sculptures … creating a very weird effect …

From Gizmodo:

Casio didn’t announce much at CES this year, but they are showing off a crazy 2D to 3D conversion service with results that literally make your photos leap off the page. They’re still putting out feelers as to whether or not consumers would actually want photos of their dog, cat, or even vacation shots converted to extruded 3D sculptures.
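
Casio hasn’t published how its service works, but the basic idea of “extruding” a flat photo can be sketched very simply: treat pixel brightness as height and emit a mesh. The snippet below is a toy under those assumptions (hypothetical file names, arbitrary extrusion depth), not Casio’s method; it writes a tiny Wavefront OBJ heightmap you could inspect in any 3D viewer.

```python
# Toy 2D-to-3D "extrusion": brightness becomes height, written out as an OBJ mesh.
# A crude illustration only; Casio's actual conversion process is not public.
import numpy as np
from PIL import Image

img = Image.open("photo.jpg").convert("L").resize((64, 64))    # hypothetical input
heights = np.asarray(img, dtype=float) / 255.0 * 10.0           # max extrusion: 10 units

rows, cols = heights.shape
with open("relief.obj", "w") as obj:
    # One vertex per pixel: x = column, y = row, z = brightness-derived height.
    for r in range(rows):
        for c in range(cols):
            obj.write(f"v {c} {r} {heights[r, c]:.3f}\n")
    # Two triangles per pixel quad (OBJ indices are 1-based).
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c + 1
            obj.write(f"f {i} {i + 1} {i + cols}\n")
            obj.write(f"f {i + 1} {i + cols + 1} {i + cols}\n")

print("Wrote relief.obj")
```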

More Here

prostheticknowledge:

CV DAZZLE: Camouflage from Computer Vision

Created by Adam Harvey, this on-going project examines and experiments with creative ways to protect yourself from facial-recognition technology. I posted about this over a year ago, but it is interesting to see where the project has been going … hair and make-up could be the future hoodie …

CV Dazzle™ is camouflage from computer vision (CV). It is a form of expressive interference that combines makeup and hair styling (or other modifications) with face-detection thwarting designs. The name is derived from a type of camouflage used during WWI, called Dazzle, which was used to break apart the gestalt-image of warships, making it hard to discern their directionality, size, and orientation. Likewise, the goal of CV Dazzle is to break apart the gestalt of a face, or object, and make it undetectable to computer vision algorithms, in particular face detection.

Because face detection is the first step in automated facial recognition, CV Dazzle can be used in any environment where automated face recognition systems are in use, such as Google’s Picasa, Flickr, or Facebook.
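
Because defeating that first face-detection step is the whole point, the effect is easy to test at home: the hedged sketch below runs OpenCV’s stock Haar-cascade frontal-face detector over a “before” and “after” photo and reports whether a face is still found. This is a generic detector check, not Harvey’s own evaluation setup, and the file names are placeholders.

```python
# Check whether a styling/makeup attempt still triggers a stock face detector.
# Uses OpenCV's bundled Haar cascade; not CV Dazzle's own tooling.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def faces_found(path):
    """Return how many faces the stock detector finds in the image at `path`."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

# Placeholder file names for the before/after comparison.
for label, path in [("before styling", "plain.jpg"), ("after styling", "dazzled.jpg")]:
    print(f"{label}: {faces_found(path)} face(s) detected")
```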

More about the project can be read about here

prostheticknowledge:

A bionic prosthetic eye that speaks the language of your brain

This is fascinating stuff …

A scientist talks about her work on prosthetic sight: a technique which could potentially benefit not only other prosthetic technologies, but also our understanding of the brain.

Sheila Nirenberg appears to have created a visual encoder, placed on the optic nerve, which can transcode visual stimuli into a signal for the brain. From Extreme Tech:

Now, reading the brain’s output (as in a prosthetic arm) is one thing, but feeding data into the brain is something else entirely — and understanding the signals that travel from the retina, through the optic nerve, to the brain is really about as bleeding edge as it gets. Nirenberg still used a brute force technique, though: By taking a complete animal eye and attaching electrodes to the optic nerve, she measured the electric pulses — the coded signal — that a viewed image makes. You might not know what the code means, but if a retina always generates the same electric code when looking at a lion, and a different code when looking at a bookcase, you can then work backwards to derive the retina’s actual encoding technique.

Perhaps even cooler, though, Nirenberg insists that this same technique — wiring up electrodes to our sense organs and brute forcing the encoding technique — could also be used to produce prosthetic ears, or noses, or limbs that can actually feel. Presumably, at some point, with enough data points under our belt, we might begin to unravel the human brain’s overarching communication codecs, too.
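
The “work backwards from recorded codes” idea can be caricatured in a few lines: record the spike pattern each known stimulus evokes, then decode a new pattern by finding the nearest recorded one. The numpy sketch below uses entirely made-up data and is a deliberately naive stand-in for Nirenberg’s actual encoder model.

```python
# Naive illustration of decoding by matching recorded neural "codes".
# Entirely synthetic data; a stand-in for the far richer retinal encoder model.
import numpy as np

rng = np.random.default_rng(0)

# Pretend we recorded a 100-dimensional spike-rate pattern for each known stimulus.
codebook = {
    "lion": rng.poisson(5.0, size=100),
    "bookcase": rng.poisson(2.0, size=100),
}

def decode(observed_pattern):
    """Return the stimulus whose recorded code is closest to the observed pattern."""
    return min(codebook, key=lambda k: np.linalg.norm(codebook[k] - observed_pattern))

# A noisy re-presentation of the "lion" stimulus should still decode correctly.
noisy_lion = codebook["lion"] + rng.normal(0.0, 1.0, size=100)
print(decode(noisy_lion))   # expected: "lion"
```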

More Here

prostheticknowledge:

Mimicking the brain, in silicon (via MIT News)

New computer chip models how neurons communicate with each other at synapses:

For decades, scientists have dreamed of building computer systems that could replicate the human brain’s talent for learning new tasks.

MIT researchers have now taken a major step toward that goal by designing a computer chip that mimics how the brain’s neurons adapt in response to new information. This phenomenon, known as plasticity, is believed to underlie many brain functions, including learning and memory.

With about 400 transistors, the silicon chip can simulate the activity of a single brain synapse — a connection between two neurons that allows information to flow from one to the other. The researchers anticipate this chip will help neuroscientists learn much more about how the brain works, and could also be used in neural prosthetic devices such as artificial retinas, says Chi-Sang Poon, a principal research scientist in the Harvard-MIT Division of Health Sciences and Technology.
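
The chip itself is analog silicon, but the kind of plasticity it models can be illustrated in software. The snippet below applies a textbook spike-timing-dependent plasticity (STDP) rule, strengthening a synaptic weight when the presynaptic spike precedes the postsynaptic one and weakening it otherwise; it is a generic STDP toy with arbitrary constants, not a model of Poon’s circuit.

```python
# Toy spike-timing-dependent plasticity (STDP) update for a single synapse.
# Illustrative of the plasticity concept only, not the MIT chip's analog circuit.
import math

A_PLUS, A_MINUS = 0.05, 0.055   # potentiation / depression amplitudes (arbitrary)
TAU_MS = 20.0                   # time constant of the STDP window

def stdp_delta(pre_spike_ms, post_spike_ms):
    """Weight change for one pre/post spike pair."""
    dt = post_spike_ms - pre_spike_ms
    if dt > 0:   # pre before post: strengthen (long-term potentiation)
        return A_PLUS * math.exp(-dt / TAU_MS)
    else:        # post before pre: weaken (long-term depression)
        return -A_MINUS * math.exp(dt / TAU_MS)

weight = 0.5
for pre, post in [(10.0, 15.0), (40.0, 38.0), (70.0, 72.0)]:
    weight += stdp_delta(pre, post)
    print(f"pre={pre} ms, post={post} ms -> weight={weight:.3f}")
```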

More Here

dangerousdesigns:

Colors of my rainbow. #advertising