Thursday, July 4, 2024

More Advances in Vision Tech


Nanostructured flat lens uses machine learning to 'see' more clearly, while using less power
Jan 2024, phys.org

Meta Imager - a lens that processes the image optically on the front end, so the system can be thinner and process more efficiently.
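The meta-imager's trick is doing part of a convolutional network in the optics themselves: the nanostructured lens performs the multiply-and-add of a convolution on the light before it reaches the sensor, so the electronic back end has less work to do. A toy sketch of the operation the optics are computing (the kernel and scene here are illustrative, not from the paper):

```python
def conv2d(img, kernel):
    """Valid 2D convolution - the kind of operation the meta-imager's
    nanostructured optics perform on light, before the sensor."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            acc = 0.0
            for a in range(kh):
                for b in range(kw):
                    acc += img[i + a][j + b] * kernel[a][b]
            row.append(acc)
        out.append(row)
    return out

# Toy scene with a vertical edge, and an illustrative edge-detect kernel.
scene = [[0, 0, 1, 1, 1]] * 5
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]
features = conv2d(scene, edge_kernel)
# The edge region lights up; uniform regions give zero.
```

Done electronically, every one of those multiply-adds costs power; done in glass, the "compute" is free once the lens is fabricated.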

Vanderbilt University: Hanyu Zheng et al, Multichannel meta-imagers for accelerating machine vision, Nature Nanotechnology (2024). DOI: 10.1038/s41565-023-01557-2

Image credit: AI Art - What Does Sadness Look Like - 2022


Lab-grown retinas explain why people see colors dogs can't
Jan 2024, phys.org

Just Retinal Organoids - By tweaking the cellular properties of retinal organoids, the research team found that a molecule called retinoic acid (an offshoot of vitamin A) determines whether a cone will specialize in sensing red or green light. Only humans with normal vision and closely related primates develop the red sensor.

via Johns Hopkins University: Retinoic acid signaling regulates spatiotemporal specification of human green and red cones, PLoS Biology (2024). DOI: 10.1371/journal.pbio.3002464. 


A camera-based anti-facial recognition technique
Jan 2024, phys.org

Anti-Facial Recognition - obfuscating, synthesizing or changing images to increase the privacy of users

CamPro - whereas most techniques use post-processing to modify images after they are captured, this technique works at the camera sensor level to produce images that protect users' facial privacy without affecting other applications

In very simplified terms: the team manipulated certain parameters in the camera's signal processing layer, right after the sensor, to strip personally identifying information from the picture while leaving the information needed for tasks like person detection and pose estimation. It works by adjusting the camera's existing processing parameters, without requiring a redesign of the camera, and they call it "privacy-preserving by birth" (why not by design?).
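To make that concrete, here is a minimal sketch of the general idea, not CamPro's actual parameters or its adversarial optimization: an ISP-style transform (a steep gamma curve plus coarse quantization, both numbers invented for illustration) that flattens the fine texture face recognition feeds on, while a strong edge such as a body silhouette survives for person detection:

```python
def privacy_isp(pixels, gamma=3.0, levels=4):
    """Illustrative ISP stage: crush fine texture with an aggressive
    gamma curve, then quantize to a few coarse levels. The real CamPro
    *optimizes* actual ISP parameters; this only shows the flavor."""
    out = []
    for p in pixels:
        g = (p / 255.0) ** gamma                    # steep gamma compresses detail
        q = round(g * (levels - 1)) / (levels - 1)  # coarse quantization
        out.append(round(q * 255))
    return out

# A scanline with fine texture (identity-like detail) and one strong edge.
scanline = [100, 110, 100, 110, 190, 200, 190, 200]
print(privacy_isp(scanline))  # -> [0, 0, 0, 0, 85, 85, 85, 85]
```

The fine 100-vs-110 wiggles collapse to a single value, but the big dark-to-bright step, the sort of cue a person detector relies on, is still there.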

What's more, because the resulting image may not be "human readable," they added some code to restore the face into something humans can recognize reasonably well, yet that still lacks the data a computer needs to identify it. This is like how a stop sign with a picture of a banana stuck on it can brick your self-driving car; to you it looks like a banana, or a mess of dots, or whatever, but to the computer it looks like something completely different, and can even make the stop sign itself invisible.

via USSLAB at Zhejiang University: Wenjun Zhu et al, CamPro: Camera-based Anti-Facial Recognition, arXiv (2024). DOI: 10.48550/arxiv.2401.00151


Science fiction meets reality as researchers develop techniques to overcome obstructed views
Feb 2024, phys.org

Using an ordinary digital camera - "We're turning ordinary surfaces into mirrors to reveal regions, objects, and rooms that are outside our line of vision"

via University of South Florida: Robinson Czajkowski et al, Two-edge-resolved three-dimensional non-line-of-sight imaging with an ordinary camera, Nature Communications (2024). DOI: 10.1038/s41467-024-45397-7

Image credit: AI Art - Vase of Golden Threads in Impossible Geometries by Escher and Lalique - 2022

Using 3D printing to make artificial eyeballs more quickly and accurately
Feb 2024, phys.org

Artificial eyeballs - just when you thought you knew them

via Fraunhofer Institute for Computer Graphics Research at Technical University Darmstadt, NIHR Biomedical Research Centre for Ophthalmology at Moorfields Eye Hospital, UCL Institute of Ophthalmology: Johann Reinhard et al, Automatic data-driven design and 3D printing of custom ocular prostheses, Nature Communications (2024). DOI: 10.1038/s41467-024-45345-5


Pushing back the limits of optical imaging by processing trillions of frames per second
Mar 2024, phys.org

Femtophotography - trillions of frames per second, used for femtosecond laser ablation, shock-wave interaction with living cells, and optical chaos 

Optical Chaos - chaos generated by laser instabilities using different schemes in semiconductor and fiber lasers

via Énergie Matériaux Télécommunications Research Centre at INRS Institut National de la Recherche Scientifique, the Advanced Laser Light Source Laboratory, Institut Jean Lamour at the Université de Lorraine, and Huazhong University of Science and Technology: Jingdan Liu et al, Swept coded aperture real-time femtophotography, Nature Communications (2024). DOI: 10.1038/s41467-024-45820-z

 
AI-powered 'sonar' on smartglasses tracks gaze, facial expressions
Apr 2024, phys.org

Both devices use speakers and microphones mounted on an eyeglass frame to bounce inaudible sound waves off the face and pick up the reflected signals caused by face and eye movements. One device, GazeTrak, is the first eye-tracking system that relies on acoustic signals. The second, EyeEcho, is the first eyeglass-based system to continuously and accurately detect facial expressions and recreate them through an avatar in real time.
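The underlying sonar math is old and simple: emit a known inaudible waveform, cross-correlate the microphone signal against it, and the correlation peak gives the echo's round-trip delay; as skin or an eyelid moves, that delay and the echo's shape shift. A bare-bones sketch with made-up numbers, not GazeTrak's actual pipeline:

```python
import math

def estimate_delay(received, probe):
    """Cross-correlate the received signal against the known probe
    and return the lag with the strongest match (the echo delay)."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(received) - len(probe) + 1):
        score = sum(received[lag + i] * probe[i] for i in range(len(probe)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A chirp-like probe (quadratic phase gives a sharp autocorrelation
# peak), and a simulated echo arriving 37 samples later.
probe = [math.sin(0.09 * i * i) for i in range(64)]
received = [0.0] * 256
for i, s in enumerate(probe):
    received[37 + i] += 0.5 * s   # attenuated echo off the face

delay = estimate_delay(received, probe)
# At a 48 kHz sample rate, one sample of delay is ~7 mm of acoustic
# travel (~3.6 mm of range), so small facial movements show up as
# shifts in this delay/echo profile.
```

The real systems track the full echo profile over time and feed it to a learned model, but the delay estimate above is the physical signal everything else is built on.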

via Cornell Smart Computer Interfaces for Future Interactions: Ke Li et al, GazeTrak: Exploring Acoustic-based Eye Tracking on a Glass Frame, arXiv (2024). DOI: 10.48550/arxiv.2402.14634


Advance in light-based computing shows capabilities for future smart cameras
Apr 2024, phys.org

A tiny array of transparent pixels could produce a fast, broadband, nonlinear response from low-power ambient light.

The thinness of the material makes it transparent, while it retains qualities that enable incoming photons to efficiently regulate electrical conductivity. The research team coupled the 2D semiconductor with a layer of liquid crystal and made it functional with an array of electrodes. The result is a smart filter comprising 10,000 pixels, each able to selectively and quickly darken in a nonlinear way when exposed to broadband ambient light.
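In neural-network terms, each pixel acts like an optical activation function: transmission stays high at low intensity and falls off nonlinearly as the light gets brighter. A plausible toy model of such a response (the curve shape and its constants are my own, not the device's measured characteristic):

```python
def transmission(intensity, i0=1.0, n=2.0):
    """Toy nonlinear pixel: passes dim light nearly untouched, darkens
    sharply for bright light. The real device gets its response from a
    2D semiconductor gating a liquid-crystal layer."""
    return 1.0 / (1.0 + (intensity / i0) ** n)

def smart_filter(image):
    """Apply the per-pixel nonlinearity across the whole array,
    like the 10,000-pixel filter described above."""
    return [[i * transmission(i) for i in row] for row in image]

print(transmission(0.1))   # ~0.99: dim light passes almost fully
print(transmission(10.0))  # ~0.01: bright light is strongly blocked
```

Because the response is nonlinear rather than a fixed attenuation, the array can act as a computing element for incoherent ambient light, not just a shade.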

"An inexpensive device measuring a couple of centimeters could make a low-powered camera work like a super-resolution camera"

via California NanoSystems Institute at UCLA: Dehui Zhang et al, Broadband nonlinear modulation of incoherent light using a transparent optoelectronic neuron array, Nature Communications (2024). DOI: 10.1038/s41467-024-46387-5
