So cool!
FIGHTING WITH THE DESIGN:
This is my current mock-up:
http://i.imgur.com/7gNN6KM.png
I'm trying to stick to a generalized scheme of blues, maybe a bit tan, with white and red for detail spots at the moment. I liked the look of the white grid of dots as a backdrop. I can make all sorts of tech-looking greeblies, but I think my real issue is I don't know what to fill the spaces up with that feels appropriate. For example, is a DNA helix silly? Probably not, given the context of the scanner, but it does feel a bit hokey.
I had a slightly more colorful take on it:
http://i.imgur.com/6BNGM5v.png
Looking at the CreativeReview.co.uk article in particular, I think I have to back up and maybe try again from scratch to get the look right.
A little more work, a little warmer? I'm going to take the suggestions you guys are giving into account and touch this a bit more tomorrow, but I'm just getting a sense of layout first.
https://i.imgur.com/L1ekHiB.png
I noticed a lot of the UI designs from 2049 have square grid layouts and connected corners, so I started restyling things accordingly. This actually feels a bit overcrowded - most of the 2049 designs I'm looking at are minimalist and have a lot of spacing involved, but I'm working in a tight space with this screen so I'm trying to get a good amount of information across without overcrowding. Haven't figured out what I want to do with some of the blank areas.
I drew a little bit of a border around things because I'm playing with the idea of manually 'drawing' a bit more of a backlight bleeding effect onto the screen. Will have to see how (and if) that works.
The eye design in particular here is not one I am married to, just one I'm using for now. Still quite interested in outside input on this.
Also, funny note: I was drawing a battery icon onto the thing, then came to the realization that actually... yeah, that could probably be functional. I am pretty sure the arduino can measure the voltage from the LiPo battery and display accordingly. Probably going to have to do that, just for the sake of it!
This is definitely getting there!
One comment, maybe the “Warning: Replicant Detected” should be a larger font than the rest? In fact, I would think it would “take over” the graphics on screen. Kind of like this (not the red stripe, just using this to show the concept)...
https://uploads.tapatalk-cdn.com/20171107/a049dee8f0f5828ba3fbff9f2cc0f5c3.jpg
Of course if you went this route, you’d want it to animate over the background, not appear immediately. So, you open the device, the screen turns on, the graphics come up with a “scanning” text, then moments later the warning banner comes up over the top of the other graphics. The device could even make a “warning” sound at this point.
Too much to ask?
I like what you're suggesting. The only possible problem I see is how the screen handles the pixel data; think of the drawing methods as a way of laying a new pixel overtop of an old one. I can draw a big red alert banner across the entire screen as a static element, no problem. If I try animating it, however, it can only grow - I can't have it feasibly slide down the screen from the top or something - because as the red pixels pass, the screen has no idea what data to replace in their wake. It'd leave a red trail as it moves around the screen.
You could theoretically solve this problem by putting the existing pixel data that will be covered by the alert banner into some kind of buffer or array - basically, storing it in memory - but each pixel takes 2 bytes to store in that fashion (these screens use 16-bit RGB565 color). That means to save the state of the full 240x320 display, we'd need 240 x 320 x 2 = 153,600 bytes, which is well in excess of the ~30,000 we have to work with on these microcontrollers.
All of that even assumes we have a function in our graphics library to read pixel data back. The Adafruit libraries (and most others) don't actually have anything like a getPixel() function - they're largely one-way communication, focused on pushing data to the screen as fast as possible so it doesn't look choppy.
So basically: I like the idea a lot, but I think it would be the 'end' of the animation cycle, because you couldn't 'un-draw' that element once it's on the screen easily.
It was pointed out to me that the screens in Blade Runner are largely not color displays. I think the major exception to this is probably in the Spinners (flying cars), but one thing that really stuck with me was the fact that the Baseline test readouts seem to be evaluating the subject with the same blue monochrome video feeds as everything else on the police chief's desk:
http://i.imgur.com/8GdXWqwl.png
This was enough to convince me that having the white sclera and colored pupil was probably overkill and distracted too much from the design of the UI. I did a quick revision of the eye graphic to test a different idea:
https://i.imgur.com/cHBcP2h.png
This can be done with the same three byte arrays as the initial image, but it actually has a lot more fidelity to me and reads cleaner as an "eye" without looking like it's being rendered by a Commodore 64. To wit:
https://i.imgur.com/6tA6znZ.png
And just to prove this can all work on a screen, I hard-coded this eye graphic quickly:
http://i.imgur.com/YmfUIa5l.jpg?2
An interesting "bug" I've seen on a couple of occasions is that the screen will sometimes start up with inverted graphics colors. I really don't know why - speculating it might happen if a less-than-fresh LiPo battery is asked for too much power at the start of the initialization sequence. Honestly, if I can figure out what's causing it, I'd like to try and see if I can use it to force random graphical artifacts on the screen, but that may be asking for trouble.
I still really like the iris idea rather than the full eye. The full eye picture doesn't do it for me and would look different depending on who was being scanned (e.g. a female eye vs a male eye). But if it was just an iris, it could be anyone, male or female. Wouldn't even matter what colour their eyes actually were, as the iris would just appear monochrome blue on the screen anyway. So you could "scan" your friends and it would look legit every time. Ha ha.
From the article: "The baseline scan was challenging (shown above and below) because it was the new updated version of the 'Voight-Kampff' test. We designed the new system – a brainscan technology that looks at the brain through the optic nerve."
That's a fair point. Hm. I did want to wait and see if I had memory to spare so that I could implement a few different sets of eyes and have the device pick a random one per scan to show on the readout. I also played around with the idea (again, memory allowing) of having a full eye shot, followed by a zoom animation that showed a much more abstracted organic analysis, like what they do when they're testing K's baseline.
Honestly, I may have had this wrong. I think part of the reason I've been using the full eye is that K asks Morton (Bautista's character) to "Look up and to the left" while pulling the device out, so I assumed the visual I would use would be something along those lines. However... the Voight-Kampff in the original Blade Runner very obviously focuses on the iris and pupil during its analysis.
http://i.imgur.com/AgWiGNcl.png
Moreover, I'm pretty certain that when K is having his baseline test done he's looking directly into the camera, and the CreativeReview.co.uk article suggests the same - see the quote above.
So... yeah. OK. I'm willing to buy the idea that maybe it should be focused tighter on the eye, rather than the full eye. I'm going to have to look into the details and background of blade runner a bit more to try and figure out exactly what in the eye they're looking for when they're scanning to identify replicants. I always imagined it was just a serial number or something printed somewhere in the eye, similar to how K found numbers on the bones during the medical analysis scene.
Edit: You know, this seems wrong. Whoever said that in the article is describing it as a "new Voight-Kampff test", but the VK test is to identify the replicants, whereas the baseline test is more of a mental health analysis? K gets the information he needs out of Morton with the scanner and does so while Morton's eyes are basically rolled back up in his head - it's not a pupillary thing.
Edit edit: My hunch is right! You can totally see a number printed on the sclera here.
http://i.imgur.com/78A6J5rl.png
Something like this, maybe? With fingers holding eyes open, or a tighter focus that doesn't show it.
https://i.imgur.com/oDmgn9e.png https://i.imgur.com/UXuVMYP.png
Saw the movie again and paid extra attention to the colours of the displays just for you lol