Blade Runner Esper Photo Analysis scene?

I always thought of it as: the camera was 3-D, & also very high resolution, say 100 gigapixels (that oughta be about right for heavily-Japanese-influenced L.A. 2019, right?). And the ESPer had "predictive intelligence" (<--just made that up. Copyright 2012, Nexus6), which allowed it to examine the coloration, shading, shadowing, lighting, & reflections coming from each pixel. Then, as the user "engaged" the 3-D aspect of the photo, the ESPer would take all the data it had gathered from the visible pixels & reconstruct the portions of the photo that had been obscured, using that data to make an educated guess as to what was supposed to be there. So, in essence, the ESPer is saying: <THIS> is the content that would cause the surrounding [visible] pixels to exhibit the attributes I've analyzed.
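That "educated guess from the surrounding visible pixels" idea can be sketched in a very crude form. This is nothing like a real predictive model, just the averaging intuition: a toy diffusion inpainter that fills an obscured region by letting the known pixels bleed inward (all names and numbers here are made up for illustration):

```python
import numpy as np

def diffusion_inpaint(image, mask, iterations=500):
    """Fill masked (obscured) pixels by repeatedly averaging their
    4-connected neighbours. The visible pixels act as fixed boundary
    conditions, so known colour 'bleeds' into the unknown region."""
    img = image.astype(float).copy()
    for _ in range(iterations):
        # Average of the four neighbours. np.roll wraps at the edges,
        # which is fine here only because the whole border is known.
        avg = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
        img[mask] = avg[mask]          # update only the obscured pixels
    return img

# A flat grey photo with an "obscured" square punched out of it:
photo = np.full((32, 32), 0.5)
hole = np.zeros_like(photo, dtype=bool)
hole[12:20, 12:20] = True
photo[hole] = 0.0
restored = diffusion_inpaint(photo, hole)   # converges back toward grey
```

A real "predictive intelligence" would of course hallucinate plausible structure rather than just smooth averages, but the boundary-pixels-constrain-the-hidden-content principle is the same.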

That would also explain why the woman in the photo didn't look like Zhora.



BUUTTTTT, it doesn't explain why the hard-copy was a different angle & looked exactly like Zhora. :wacko
 

Oh that's because the Esper is connected to the Blade Runner Unit database- it interpolated the information and printed out a better view of the Replicant Deckard was looking for. ;)


Kevin
 
Gene, great post, thanks for all that info!



Oh that's because the Esper is connected to the Blade Runner Unit database- it interpolated the information and printed out a better view of the Replicant Deckard was looking for. ;)


Kevin


That actually makes a lot of sense, I never considered that! :)
 
Thanks a lot for posting all those awesome pictures, Gene!! :D

I also thought that the image itself had data encoded on it so that it was three-dimensional. I always thought the black border with the red text on the image itself was like a representation of that fact.

Correct me if I'm wrong, but aren't those blue "flash frames" in the sequence Kodaliths, a.k.a. the negative cels they used for the backlit circuitry glow in Tron? Whenever a frame "flashes", all the blacks in the image seem to glow exactly like the backlit animation from Tron.
 
The Esper is nothing compared to Stallone in Get Carter. He's watching a video and some guy has his back to the camera. So he freezes the tape and puts a mirror against the TV screen. And can now see the guy's face! YEAH RIGHT!!!
 
Does anyone know if that mirror in Adam Savage's man cave (visible on Tested) is the original Blade Runner prop? Every time I see that circular mirror, I'm convinced that is the prop used in Leon's apartment for the Esper scene.
 
I think it's similar to the tech used by those focus-later (light-field) cameras that recently came out. They store all visible field data, which lets you come back later and change the focal point of most objects in the image. So say the photo has an apple 5 feet from the camera, a banana 10 feet away, and an orange 20 feet from the camera. You don't focus the camera, you just take the photo. Later on you can decide to bring the apple, the banana, or the orange into focus...or all three.

I like to think that the Esper works like this in a way: the camera records all light within the frame of the photo, and the Esper is then able to rebuild objects that weren't photographed by direct sight line (within, or just outside of, the frame) by deducing the reflected/refracted light from those off-screen objects that was recorded within the field of view of the photograph.

Thus, I guess in the BR reality, all or most photos are this new kind of high-data photo, and the Esper itself is the supercomputing rebuilder/manipulator of the lost, unseen, not-quite-photographed data.
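The "focus later" trick those cameras use can be sketched as shift-and-add refocusing over the sub-aperture views of a light field. A toy version, using a tiny synthetic light field with horizontal parallax only (real Lytro-style data has a 2-D grid of views, and the numbers here are invented for the demo):

```python
import numpy as np

def refocus(light_field, slope):
    """Shift-and-add refocusing: shift each sub-aperture view in
    proportion to its offset from the centre view, then average.
    `slope` selects which depth plane ends up in focus."""
    n_views, h, w = light_field.shape
    centre = n_views // 2
    acc = np.zeros((h, w))
    for u in range(n_views):
        shift = int(round((u - centre) * slope))
        acc += np.roll(light_field[u], shift, axis=1)
    return acc / n_views

# Synthetic light field: a bright vertical stripe whose apparent
# position shifts by 2 px per view (i.e. it sits at disparity 2).
n_views, h, w = 5, 16, 64
lf = np.zeros((n_views, h, w))
for u in range(n_views):
    x = 30 + 2 * (u - n_views // 2)    # stripe moves with viewpoint
    lf[u, :, x] = 1.0

sharp   = refocus(lf, slope=-2)   # undo the true disparity: stripe is crisp
blurred = refocus(lf, slope=0)    # plain average: stripe smears across 5 cols
```

Objects at other depths have a different disparity, so each `slope` value brings a different depth plane into focus, which is exactly the apple/banana/orange choice described above.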
 
I can't recall exactly, but there's some high science (think string theory, alternate dimensions, time travel) that says glass actually records anything it may reflect, so if they had that worked out in BR's universe, I suppose Deckard could zoom in on the mirror and manipulate it to see anything in the room? From any time?
 
What exactly it was about this photo that made Deckard so interested in it, I have no idea.

I've just watched the sequence again, and perhaps this will help.

One of the things the Esper/Deckard focuses on is two drinking glasses. According to gumshoe narrative tradition, two drinking glasses left together indicate that more than one person has been in the room (and drinking with the occupant). From this point, Deckard is looking for another person in the room, possibly in the bedroom area (because they're drinking together, so....?). If the Esper has psi capabilities (as has been suggested by others in this thread), the machine would be looking for that area and that person too. (If we run with the Deckard=replicant theory, this interaction becomes even more possible, but anyway.)

My other theory is - if there is a machine that finds things hidden in photographs, might there not also be a camera that hides things in photographs?

Deckard selects a picture that seems to be of a nondescript room - why would anyone keep that photograph? Why would they regard it as important? (Roy: 'Did you get your precious photos?') Leon has had an intimate encounter with Zhora that he wants to remember (and keep secret), so he photographs her while she is sleeping and hides her - using the stealth technology in the device he used to take the photo - making the picture look banal. And here we are now, close to November 2019, and we can all take a photograph, manipulate it with our hand-held communication device, and then transmit it....

I saw this film at the time of the original release (yes, with the voice-over and the "happy" ending), and the technology portrayed at the time (1982) was beyond belief, but it still served the story. The whole "photographs" business is not so important to the plot, but it is integral to the story, showing Leon and Zhora as 'human', with human desires for memory, privacy, secrets, their own lives. That's the great question behind Blade Runner: how human are machines? What happens when humans do inhuman things - do they become like machines? Anyway, everyone was talking about the technology in this thread, so I thought I'd show that human intuition is useful too, and if you come home and see two used drinking glasses on the bench, think on.
 
Well in Blade Runner 2049...
The photos that K has of the tree had some animation to them and were clearly taken from his drone's spatial mapping of the area. Maybe photos contain more spatial information than is visible. Maybe enough information is embedded in the hard copy that an analyzer might literally peek around corners in a photo.
 
The angle at which he's looking towards the mirror starts changing. He sees things that weren't previously visible, and not just small details: entire people, even whole rooms. He manages to somehow locate Zhora within the photo. It wasn't as if Deckard simply zoomed in on her reflection in the mirror; he was definitely rotating his view.

My question is, is this technology ever explained?

Any theories/insight would be appreciated!

What Deckard is doing might be possible. He is forensically analyzing the photo he found, but how is he doing it?

Two ideas I can think of are:

Scenario 1) (more likely) In the future, all cameras have additional light sources (quick laser pulses firing) to illuminate most areas hidden within the scene, and perhaps a plenoptic camera. This additional data (of objects hidden from view) is embedded in the traditional picture as metadata. So Deckard can now navigate through this scene and render the additional data on the hidden objects using the computer.
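Scenario 1 is essentially "the picture carries extra layers the camera captured but a flat print doesn't show." A deliberately silly toy model of that (every name, pixel coordinate, and the 15-degree threshold are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class EsperPhoto:
    """Toy model of scenario 1: the visible picture plus 'hidden layer'
    data the camera also captured (say, via those laser pulses) and
    embedded in the file as metadata."""
    visible: dict                                # pixel -> what you see head-on
    hidden: dict = field(default_factory=dict)   # pixel -> what's behind it

    def render(self, pixel, view_angle_deg=0.0):
        # Past ~15 degrees off-axis (an arbitrary toy threshold) the
        # occluder no longer blocks the line of sight, so the embedded
        # hidden-layer data is revealed instead of the head-on view.
        if abs(view_angle_deg) > 15 and pixel in self.hidden:
            return self.hidden[pixel]
        return self.visible[pixel]

photo = EsperPhoto(
    visible={(10, 20): "mirror frame"},
    hidden={(10, 20): "figure reflected in mirror"},
)
head_on = photo.render((10, 20))                     # the banal picture
rotated = photo.render((10, 20), view_angle_deg=30)  # hidden layer revealed
```

The point is only that "enhance" here is navigation through pre-recorded data, not invention: the Esper would be rendering what the camera already stored.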

Scenario 2) (less likely) In lieu of having additional light sources and/or plenoptic lenses, one can use machine learning.
In pretty much all photos, there is more data than just a pretty picture. There are light sources from the environment (and traditional flashes from the camera). Perhaps one can infer the approximate location, intensity, and nature of the light sources, say, based upon the shadows and flaring in the picture. From there, one can trace the light rays around the room to see what objects, not present in the picture, actually influence the resulting light intensity in the picture. From there, one can work backwards to make an educated guess as to the shapes and optical properties of those hidden objects. Perhaps this can be accomplished by some artificial intelligence running on a crazy fast computer.

In scenario 2, one can assume the Esper machine employs machine learning/ray tracing/etc. to infer what the missing data in the photograph is, based upon the light field already captured in the picture. The computer AI fills in the missing or blurry parts of the photo, approximating data in real time as Deckard navigates through the scene.
Of course, the deeper he dives into the picture, the more approximate and uncertain the hidden objects will be due to the sheer amount of assumptions and computation needed.
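The "work backwards from the recorded light" step in scenario 2 can be caricatured as a linear inverse problem. Here the light-transport matrix (how much of each hidden patch's radiance reaches each visible pixel) is simply assumed known; in reality it would itself have to be estimated from geometry and ray tracing, which is where all the hard uncertainty lives:

```python
import numpy as np

# Toy inverse-rendering problem: 3 hidden surface patches reflect light
# onto 5 visible pixels.  T[i, j] = fraction of patch j's radiance that
# reaches pixel i (assumed known here; really it comes from ray tracing).
rng = np.random.default_rng(0)
T = rng.random((5, 3))

hidden_true = np.array([0.8, 0.1, 0.5])   # radiance of the unseen patches
observed = T @ hidden_true                # what the photo actually records

# "Work backwards": least-squares estimate of the hidden radiance from
# the visible pixels alone.
hidden_est, *_ = np.linalg.lstsq(T, observed, rcond=None)
```

With noiseless data and a known transport matrix this recovers the hidden radiance exactly; add sensor noise or uncertainty in T and the estimate degrades quickly, which matches the point above about the reconstruction getting more approximate the deeper Deckard dives.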

Lo and behold, in either case, Deckard actually finds some actionable evidence from data hidden to the human mind, but present in the picture.
 
I always thought it was technology more advanced than what we have now, a bit like Lytro's cinema camera technology


J
 