Another Blade Runner thread: The Portable Voight-Kampff Scanner

FIGHTING WITH THE DESIGN:

This is my current mock-up:

http://i.imgur.com/7gNN6KM.png

I'm trying to stick to a generalized scheme of blues, maybe a bit of tan, with white and red for detail spots at the moment. I liked the look of the white grid of dots as a backdrop. I can make all sorts of tech-looking greeblies, but my real issue is that I don't know what to fill the spaces with that feels appropriate. For example, is a DNA helix silly? Probably not, given the context of the scanner, but it does feel a bit hokey.

I had a slightly more colorful take on it:

http://i.imgur.com/6BNGM5v.png

Looking at the Creativereview.co.uk article in particular, I think I have to back up and maybe try again from scratch to get the look right.

I think you're selling yourself short; the graphics are very good. My 2 cents: instead of a realistic-looking eye, draw a "diagram" of sorts, as if telling a user how to read the eye, and maybe a maker's brand or logo, and that's it. The DNA is perfect. You could also base your UI on the mug shots of the replicants:

Screen.jpg

How does that sound?
 
Loving this thread!

What about something like this, so just the iris rather than the entire eye...

cdc8de0c0820312e66ee87429c9fb3ab.jpg


And maybe some basic mapping lines on the iris kinda like this...

18c495ffb611a2348a084ff74489cd6b.jpg


So an iris, with mapping lines, done in retro '80s graphics. Ha ha. Just my 2 cents.


 
I was also playing with the idea of keeping it simple and possibly doing an Iris scan/grid effect as a .GIF, .AMV, or .AVI video
for my MP4 player. What I like about using an MP4 player is I can add or delete any file at any time with the USB connection.


 
A little more work, a little warmer? I'm going to take the suggestions you guys are giving into account and touch this a bit more tomorrow, but I'm just getting a sense of layout first.

https://i.imgur.com/L1ekHiB.png

I noticed a lot of the UI designs from 2049 have square grid layouts and connected corners, so I started restyling things accordingly. This actually feels a bit overcrowded - most of the 2049 designs I'm looking at are minimalist and have a lot of spacing involved, but I'm working in a tight space with this screen so I'm trying to get a good amount of information across without overcrowding. Haven't figured out what I want to do with some of the blank areas.

I drew a little bit of a border around things because I'm playing with the idea of manually 'drawing' a bit more of a backlight bleeding effect onto the screen. Will have to see how (and if) that works.

The eye design in particular here is not one I am married to, just one I'm using for now. Still quite interested in outside input on this.

Also, funny note: I was drawing a battery icon onto the thing, then came to the realization that actually... yeah, that could probably be functional. I am pretty sure the Arduino can measure the voltage from the LiPo battery and display accordingly. Probably going to have to do that, just for the sake of it!

This is definitely getting there!

One comment, maybe the “Warning: Replicant Detected” should be a larger font than the rest? In fact, I would think it would “take over” the graphics on screen. Kind of like this (not the red stripe, just using this to show the concept)...
https://uploads.tapatalk-cdn.com/20171107/a049dee8f0f5828ba3fbff9f2cc0f5c3.jpg

Of course if you went this route, you’d want it to animate over the background, not appear immediately. So, you open the device, the screen turns on, the graphics come up with a “scanning” text, then moments later the warning banner comes up over the top of the other graphics. The device could even make a “warning” sound at this point.

Too much to ask?



I like what you're suggesting. The only possible problem I see is how the screen handles the pixel data; think of the drawing methods as a way of laying a new pixel overtop of an old one. I can draw a big red alert banner across the entire screen as a static element, no problem. If I try animating it, however, it can only grow - I can't have it feasibly slide down the screen from the top or something - because as the red pixels pass, the screen has no idea what data to replace in their wake. It'd leave a red trail as it moves around the screen.

You could theoretically solve this problem by putting the existing pixel data that will be covered by the alert banner into some kind of buffer or array - basically, storing it in memory - but each pixel takes about 2 bytes of memory to store in that fashion. That means to save the state of the full 240x320 display, we'd need 153600 bytes, which is well in excess of the ~30,000 we have to work with on these microcontrollers.

All of that even assumes that we have a function in our graphics library to read pixel data. The Adafruit libraries (and most others) actually don't have anything like a getPixel() function to read that information - they're largely one-way, focused on pushing data to the screen as fast as possible (so it doesn't look choppy).

So basically: I like the idea a lot, but I think it would be the 'end' of the animation cycle, because you couldn't 'un-draw' that element once it's on the screen easily.

It was pointed out to me that the screens in Blade Runner are largely not color displays. I think the major exception to this is probably in the Spinners (flying cars), but one thing that really stuck with me was the fact that the Baseline test readouts seem to be evaluating the subject with the same blue monochrome video feeds as everything else on the police chief's desk:

http://i.imgur.com/8GdXWqwl.png

This was enough to convince me that having the white sclera and colored pupil was probably overkill and distracted too much from the design of the UI. I did a quick revision of the eye graphic to test a different idea:

https://i.imgur.com/cHBcP2h.png

This can be done with the same three byte arrays as the initial image, but it actually feels higher-fidelity to me and reads more cleanly as an "eye" without looking like it's being rendered by a Commodore 64. To wit:

https://i.imgur.com/6tA6znZ.png

And just to prove this can all work on a screen, I hard-coded this eye graphic quickly:

http://i.imgur.com/YmfUIa5l.jpg?2

An interesting "bug" I've seen on a couple of occasions is that the screen will sometimes start up with inverted graphics colors. I really don't know why - speculating it might happen if a less-than-fresh LiPo battery is asked for too much power at the start of the initialization sequence. Honestly, if I can figure out what's causing it, I'd like to try and see if I can use it to force random graphical artifacts on the screen, but that may be asking for trouble.

Yeah the blue monochrome is cool. I don't see why the whole thing couldn't just be that.

Regarding the "warning" banner, I wouldn't use the solid red banner, that was just to illustrate a warning banner on top of the background. Regarding it animating, it wouldn't need to move around the screen, but maybe flashing on and off would be cool, if coding allows.

I still really like the iris idea rather than the full eye. The full eye picture doesn't do it for me and would look different depending on who was being scanned (e.g. a female eye vs a male eye). But if it was just an iris, it could be anyone, male or female. Wouldn't even matter what colour their eyes actually were, as the iris would just appear monochrome blue on the screen anyway. So you could "scan" your friends and it would look legit every time. Ha ha.
 
I still really like the iris idea rather than the full eye. The full eye picture doesn't do it for me and would look different depending on who was being scanned (e.g. a female eye vs a male eye). But if it was just an iris, it could be anyone, male or female. Wouldn't even matter what colour their eyes actually were, as the iris would just appear monochrome blue on the screen anyway. So you could "scan" your friends and it would look legit every time. Ha ha.

That's a fair point. Hm. I did want to wait and see if I had memory to spare so that I could implement a few different sets of eyes and have the device pick a random one per scan to show on the readout. I also played around with the idea (again, memory allowing) of having a full eye shot, followed by a zoom animation that showed a much more abstracted organic analysis, like what they do when they're testing K's baseline.

Honestly, I may have had this wrong. I think part of the reason I've been using the full eye is that K asks Morton (Bautista's character) to "Look up and to the left" while pulling the device out, so I assumed the visual I would use would be something along those lines. However... the Voight-Kampff in the original Blade Runner very obviously focuses on the iris and pupil during its analysis.
http://i.imgur.com/AgWiGNcl.png



Moreover, I'm pretty certain when K is having his baseline tests done he's looking directly into the camera, and the CreativeReview.co.uk article suggests:

The baseline scan was challenging (shown above and below) because it was the new updated version of the ‘Voight-Kampff’ test. We designed the new system – a brainscan technology that looks at the brain through the optic nerve.

So... yeah. OK. I'm willing to buy the idea that maybe it should be focused tighter on the iris, rather than the full eye. I'm going to have to look into the details and background of Blade Runner a bit more to try and figure out exactly what in the eye they're looking for when scanning to identify replicants. I always imagined it was just a serial number or something printed somewhere in the eye, similar to how K found numbers on the bones during the medical analysis scene.

Edit: You know, this seems wrong. Whoever said that in the article is describing it as a "new Voight-Kampff test", but the VK test is to identify the replicants, whereas the baseline test is more of a mental health analysis? K gets the information he needs out of Morton with the scanner and does so while Morton's eyes are basically rolled back up in his head - it's not a pupillary thing.

Edit edit: My hunch is right! You can totally see a number printed on the sclera here.
http://i.imgur.com/78A6J5rl.png



Something like this, maybe? With fingers holding eyes open, or a tighter focus that doesn't show it.

https://i.imgur.com/oDmgn9e.png
https://i.imgur.com/UXuVMYP.png
 
For what it's worth, here is a display they were using at the SDCC Blade Runner experience thing when they "scanned" people coming in. Gives a bit more certainty to the blue overlay.

blade-runner-2049-svcc-20170026.jpg
 
Ah, good spotting. So the eye scanner is used to scan for a number in the base of the eye, not the iris itself, correct? Bear with me, I'm yet to see the new BR. :$

 
Or even closer again, so you just really see the number, with say the corner of the iris in frame...

Close up.png
 
Saw the movie again and paid extra attention to the colours of the displays just for you lol


You're right, pretty much all of the displays are a shade of blue. Basically as if a very blue filter was put onto everything. And usually if there's another colour, it's either red or yellow.

There also seems to be a fair amount of movement on the screens. As I understand it, animated stuff is sort of out of the picture due to storage limitations. But maybe in the future, or if you find a way around the space issue, it would be a cool addition.


Also, something I thought of earlier. I'm not sure if you said whether this type of thing was doable at all, but it would maybe be cool if there were two images for the screen: one for if "a replicant was detected" and another for if it's just a human. That way, when you activated the scanner, it would randomize which image to display. Just a thought - it could be kind of a fun bit of functionality to scan people's eyes to see "if they're a replicant or not".
 
Thanks @Kylash for the better pic. So maybe a shot size like this...

Close up 2.jpg

(Forgive the crudeness of this design, I did it in Mac Preview! Lol. And I'm no designer.)

Also note, when you zoom in on that pic, the ID appears to also be "branded" to the inside of the lower eyelid.
 