Another Blade Runner thread: The Portable Voight-Kampff Scanner

Hi Ein,
Will the hollow body of the scanner allow for a "spring style" pop up (automatic) or just manual pull up?
I'm currently trying to design mine for the automatic pop-up, along with all the electronics and battery I'm hoping to jam in there. :)

 
The last week or so has been hectic as hell at work. I'm a family law attorney, and following Thanksgiving it's almost inevitable that I end up with a lot of people who have just dealt with family drama and now want to talk to a lawyer. Sorry for not updating more regularly.

That said, I've gotten a lot done, including what I think are the bulk of the animations. I'll produce a better write-up about things tomorrow, but for now, I'll simply share this video, which does a pretty good job of summarizing where things are at right now:

 
If you're as good a lawyer as you are an electronics, digital graphics, and firmware designer, I would never want you representing anyone I take to court. Did you do a career change or something?



 
The updates never cease to amaze.

Just throwing this out there, but are you set on the audible scanning sound you're currently using? While I think it sounds fine, I also wonder if there is a more Blade Runner-esque sound effect that could be achieved. As it stands it certainly has an '80s quality to it, but perhaps a touch of a modern/bassy element mixed in could bring it closer to the BR 2049 aesthetic. Just my 2 cents anyway.
 
The updates never cease to amaze.

Just throwing this out there, but are you set on the audible scanning sound you're currently using? While I think it sounds fine, I also wonder if there is a more Blade Runner-esque sound effect that could be achieved. As it stands it certainly has an '80s quality to it, but perhaps a touch of a modern/bassy element mixed in could bring it closer to the BR 2049 aesthetic. Just my 2 cents anyway.

Yeah, I'm in agreement here. The stuff shown in this demo video is incredible, and I'd queue to get one of these.

BUT from memory, the scanning sound was different in the movie. It was more of a motorized whir that escalates, and a sequence of electronic beeps/chirps when the scan is successful. I'm sure someone will be smart enough to be able to get a rip of the accurate reference audio.
 
Holy smokes! Can't wait to have one of these things in my hands. I love the level of detail you've packed into this project!
 
You had to put the bar higher than your last update, didn't ya? Well, YOU HAVE SUCCEEDED!!! How cool is that? So far, you win! :)
 
If you're as good a lawyer as you are an electronics, digital graphics, and firmware designer, I would never want you representing anyone I take to court. Did you do a career change or something?

You are being way too kind. I had a computer science class in high school in Ye Olde Days where they taught me a bit of Java, and I've just been leaning on the concepts from that while I've been picking up C++ and the weird features of the Arduino / Teensy IDE. Once you understand the basics of programming, a lot of the process comes down to per-language quirks that help you get what you want done. If I'm being honest, I enjoy this kind of work far more than the law, but I'm still just an amateur, and I can guarantee that someone with a proper computer science background reviewing my code would probably be horrified by rules I don't even know I'm breaking. If it works, it works, though!

Just throwing this out there, but are you set on the audible scanning sound you're currently using? While I think it sounds fine, I also wonder if there is a more Blade Runner-esque sound effect that could be achieved. As it stands it certainly has an '80s quality to it, but perhaps a touch of a modern/bassy element mixed in could bring it closer to the BR 2049 aesthetic. Just my 2 cents anyway.

Yeah, I'm in agreement here. The stuff shown in this demo video is incredible, and I'd queue to get one of these.

BUT from memory, the scanning sound was different in the movie. It was more of a motorized whir that escalates, and a sequence of electronic beeps/chirps when the scan is successful. I'm sure someone will be smart enough to be able to get a rip of the accurate reference audio.

OK. This is a totally fair critique, but I'm not sure which direction to go to improve it yet. Let's talk about this bit in particular.

The "scanning" audio is something I made up outright. The Teensy can play audio files through the speaker if they are formatted correctly and stored in flash memory, but they take up a ton of space - comparable to bitmap graphics - and would be somewhat poor quality without a dedicated sound board. The way I've been generating sounds with this setup is the same as what you could do with a piezo buzzer, basically - just tell it to oscillate one of the output pins at a certain frequency for a certain duration. For example, the current scanning sound effect code is a loop that simply calls a line:

tone(TONE_PIN, random(3000,8000), 20);

This is basically an instruction to generate a random frequency between 3,000 Hz and 8,000 Hz and push it out to the speaker pin for 20 milliseconds. Not fancy on its own, but when you are hitting that line in a quick loop it makes that tinkling cadence that currently serves as the scanning audio. It actually looks really neat on a spectrum analyzer, because you can see the individual points representing the random frequencies it's generating:

http://i.imgur.com/SdJ4s83l.png
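For reference, here's roughly what that loop might look like pulled out on its own (a sketch only - the pin number and SCAN_DURATION name are placeholders rather than anything from my actual code, and in the real firmware this kind of thing runs inside the non-blocking animation loop I describe further down instead of a while()):

Code:
const int TONE_PIN = 10;                    // placeholder; whichever pin drives the speaker
const unsigned long SCAN_DURATION = 2500;   // run the effect for roughly 2.5 seconds

void playScanNoise() {
  unsigned long start = millis();
  while (millis() - start < SCAN_DURATION) {
    tone(TONE_PIN, random(3000, 8000), 20); // random chirp between 3 kHz and 8 kHz, 20 ms long
    delay(20);                              // let each chirp finish before picking the next one
  }
  noTone(TONE_PIN);
}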


Likewise, the startup whine is simply a climbing frequency that increments throughout a loop and ends up being a fairly good logarithmic curve:

http://i.imgur.com/C7q4P4ml.png


All this is to say that whatever audio I end up using will probably have to be generated in this fashion, out of basic frequencies. Asking for it to produce something like a "bass" sound is a challenge. According to the datasheet, the speaker itself has an effective range from as low as 200 Hz to as high as 20,000 Hz, but my observation is that lower frequencies tend to be quieter. Just as a quick test I knocked the frequency range down by an order of magnitude, to 100 Hz to 300 Hz, and did a quick recording. I know this isn't necessarily what you guys were after, but it's just a demonstration of the output at the low end:

https://youtu.be/qlggp2P2UuA
I don't honestly recall any sound effects playing during the scan in the movie. I will confess that I've just grabbed a totally garbage theater-cam download of 2049 to check as a reference. It actually does seem like there is a sound, but it's almost exactly the same noise as my start-up whine effect:

https://youtu.be/RloHtvZ-ymM
That is... disappointing? Because while I can make it produce that noise easily - just re-use the same sound that plays during the start-up whine - that would be underselling what the device is capable of a bit. Still, my gut tells me that I should probably just use the wind-up whine for the scan to match the screen-accurate audio.
 


Don't get me wrong, it wasn't a criticism, more an observation. Let's be honest, the original prop made no noise at all and it was all added in post-production. So you're doing something far beyond what even the original device was capable of, and that is to be applauded. But... if there's ever potential for a replica to be just that little bit more authentic, I'll always push for it.

To me, the scanning noise definitely sounds like a fairly drawn-out climbing frequency with some sort of modulated noise going on underneath the whine, but my ears definitely hear a sequence of beeps and chirps at the end of the climbing 'whine' as K concludes his scan (confirming a successful match, perhaps?). It's really difficult to tell with that quality of audio, but perhaps for the scan noise you could string two sequences together? One climbing 'whine' while the button is held, then a sequence of random beeps once the button is released and the display appears?

Again, it's difficult to suggest much with confidence without a really clear version of the audio in that scene.
 
I’ve learned a tremendous amount throughout my work on this project about coding in C++, and I’m going to be making another one of those long-winded code-centric posts here, so I apologize in advance. Honestly, the reason I’m doing these kinds of posts is because I don’t do programming work all that frequently. I am posting this as much as a reference for my future self as anything, as I’m sure next time I’m working on coding something it’ll be months away and I’ll have handily forgotten most of what I’ve been learning here.

As I had mentioned in one of my prior posts, I had been using Paul Stoffregen's ILI9341_t3 library to drive data on the TFT display, and for the purposes of static information (pictures, text, whatever) it was more than sufficient. However, animation of these elements would prove to be a significant obstacle. The basic idea behind how these screens work is that you push information to them and they stay in that state until something overwrites them. A basic instance is a single pixel – you can tell it to display a red color, and it will do so indefinitely so long as the device remains powered and it does not receive any subsequent data to change that specific pixel to a different color. However, this means that any region subject to animation essentially requires a three-step process:

  • Draw the initial graphical element;
  • Draw the change in the graphical element;
  • Draw the final state of the graphical element.
If you have the memory for it, you can create a 'frame buffer' – a memory construct that can do these calculations invisibly and, once they are done, simply push the finished result to the screen. However, a 240 x 320 screen is 76,800 pixels – at 16 bits per pixel, that's roughly 150 KB for a single frame – and it probably goes without saying that even with the expanded memory of a Teensy 3.2 (64 KB of RAM), we don't have the necessary overhead for that kind of process here. Another approach you can take is to re-draw everything after a change, which does work, but ends up being hugely inefficient. As an example, anything that crosses over the UI lines would necessitate the UI lines being re-drawn. You can call the function that draws the UI, but it will end up re-drawing the UI lines across the entire display, which in the context of animation begins to cause a flickering effect as these areas are continually re-drawn. Not great.

The solution to a lot of my headaches came when I discovered KurtE's ILI9341_t3n library. His changes to Paul Stoffregen's library were fairly complex, and not at all designed for compatibility with the Teensy 3.2. Instead, he rewrote a lot of things to take advantage of some extra tricks and features present in the Teensy 3.5 and 3.6 controllers. He actually did implement the frame buffer concept described above, and also modified the serial communication methods substantially. None of these had a glimmer of hope of working on the cheaper Teensy 3.2 boards. However, one big piece of functionality I noticed he added was the ability to create 'clipping' regions that restrict drawing on the screen to a modifiable rectangular area. In practice, you can call the 'setClipRect()' function, and any subsequent drawing calls will only be able to draw inside this region. Once done, you can reset the region with another call to make the entire screen writable again. A simple graphical illustration here should get the idea across:



With a clipping region set beforehand, the exact same function call now only draws on a portion of the screen. This is a small change in practice, but it has huge implications. Now, we can set a clipping region and only re-draw the areas of the screen that require it for animation. This all but eliminates display flickering, as the device is no longer trying to update things across the entire 240×320 space of the screen, but a much smaller area instead. In order for this to work, a large number of the basic drawing functions had to be re-written to include a preliminary check at the beginning to see if the area about to be drawn falls inside the 'approved' clipping region. It took me an afternoon of careful splicing, but I was able to extract the clipping region code from KurtE's library and build it into my local version of the TFT library I had been using. For something like the writeRect function in the image example above, this is a sample of the type of code that needed to be inserted:

Code:
    // Rectangular clipping
    uint16_t x_clip_left = 0;   // how many entries in the colors array to skip at the start of a row
    uint16_t x_clip_right = 0;  // how many color entries to skip at the end of a row

    // See if the whole thing is out of bounds...
    if ((x >= _displayclipx2) || (y >= _displayclipy2)) return;
    if (((x + w) <= _displayclipx1) || ((y + h) <= _displayclipy1)) return;

    // In these cases you can not do simple clipping, as we need to stay synchronized with the colors array.
    // We can clip the height, as once we reach the last visible row we don't have to go any farther,
    // and maybe the starting y, as we will advance the color array pointer.
    if (y < _displayclipy1) {
        int dy = (_displayclipy1 - y);
        h -= dy;
        pcolors += (dy * w);    // advance color array to y = _displayclipy1
        y = _displayclipy1;
    }
    if ((y + h - 1) >= _displayclipy2) h = _displayclipy2 - y;

    // For X, see how many items in the color array to skip at the start of a row, and likewise at the end
    if (x < _displayclipx1) {
        x_clip_left = _displayclipx1 - x;
        w -= x_clip_left;
        x = _displayclipx1;
    }
    if ((x + w - 1) >= _displayclipx2) {
        x_clip_right = w;
        w = _displayclipx2 - x;
        x_clip_right -= w;
    }
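To give a sense of how this gets used day-to-day, the pattern ends up being roughly the following (a sketch under my reading of the t3n additions – drawEyeFrame() is just a stand-in for whatever drawing happens inside the window, and the no-argument setClipRect() call is what restores the full screen):

Code:
tft.setClipRect(20, 40, 120, 120);   // only this 120x120 window can be drawn into now
drawEyeFrame();                      // any drawing calls made here are clipped automatically
tft.setClipRect();                   // reset, so the whole screen is writable again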

As a practical matter, I had some concerns that there would still be a fair bit of device slowdown, as even if you're not drawing outside of the clipping region the functions still have to sweep progressively through the bitmap data to figure out what needs to be drawn. In practice, Kurt's code is smartly designed to minimize the amount of time spent passing through any areas outside the clipping region, such that there was virtually no hit to performance that I was able to detect. I had to leave most of the rest of his library modifications out of my code, as they wouldn't work on the Teensy 3.2, but it's safe to say that without this extra functionality I wouldn't have been able to do most of what I have accomplished with the screen to date.

Animation is not just changing information on a screen, but also a function of change over time. You can control time constraints inside an Arduino environment with a simple delay(); call, which basically stops whatever the device is doing for that period.

Say, for example, you had an object and wanted to move its X position from 0 to 100. You could simply increment the x variable of that object in a loop, and after each change you could ask the microcontroller to delay(100); to make sure it moved at the pace you wanted across the screen. The problem arises, however, when you have multiple things that you're asking the microcontroller to do at once. During a delay() call, nothing is calculated or advanced. No other elements could move simultaneously on the screen. Even button presses from the user would go ignored, which would make for a very frustrating experience. This is called 'blocking' in Arduino parlance, and there are a number of resources that do a much better job of explaining the concept and its issues than I can afford the time for.

The way we typically deal with these scenarios is to avoid the use of delay() entirely. Instead, we create timers on the Arduino using the millis() command, which reads the number of milliseconds since the program started. This gives you an absolute reference to a value of time that is always increasing, and based on this value you can determine whether enough time has passed to actually take a step in multiple different, simultaneous processes. To give you a pseudocode example:

Code:
unsigned long interval = 1000;
unsigned long timer1Previous = 0;
unsigned long timer2Previous = 0;

void loop() {
  unsigned long timer1Current = millis();
  if (timer1Current - timer1Previous > interval) {
    doThing1();
    timer1Previous = timer1Current;
  }

  unsigned long timer2Current = millis();
  if (timer2Current - timer2Previous > interval) {
    doThing2();
    timer2Previous = timer2Current;
  }
  // and so on.
}

As the loop repeats, the value of millis() is always climbing, but by referencing it against a previous value of the last execution of the function and an interval value to tell us how far apart the executions should be, we can now have code that executes in near-simultaneous fashion without obstructing itself.
Kurt’s was not the only library I was able to integrate and take advantage of. Although it’s entirely possible to do all of the above timing without any fancy library assistance, it becomes cumbersome. I adopted pfeerick’s elapsedMillis library to help me with the tracking of separate animation loops, which dramatically simplified the work involved. Now, each loop could simply have a designated timer object assigned to it, and each timer could easily be reset after the associated functions were run, all with basically a single line of code each. Way less of a headache.
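For anyone curious, the basic pattern with elapsedMillis ends up looking something like this (the names here are illustrative rather than pulled from my actual code):

Code:
#include <elapsedMillis.h>

elapsedMillis scanlineTimer;                 // counts up in milliseconds on its own
const unsigned int SCANLINE_INTERVAL = 30;   // how often to advance this particular animation

void advanceScanline() { /* move one animation element a step - placeholder */ }

void loop() {
  if (scanlineTimer > SCANLINE_INTERVAL) {
    advanceScanline();
    scanlineTimer = 0;                       // resetting the timer is just a single assignment
  }
}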

Another aspect of animation is the idea of easing. Simply moving an object from X = 0 to X = 100 by incrementing the value of X is fine, and it will move in a linear fashion from start to finish. However, it will look unnatural; nothing in nature moves linearly from one point to another. In reality, things tend to accelerate or decelerate as they move. Our brains are wired to expect this kind of motion, so when animating we should be looking to duplicate these natural acceleration changes. Easing is a way to simulate this effect by replacing linear changes with ones that ebb and flow. From what I have seen, a large amount of modern easing calculations stem from Robert Penner's Easing Functions. Robert is a figure from Ye Olde Days of Macromedia Flash and the early internet, and he created a fairly comprehensive set of reusable easing functions that have been embraced in just about every programming language I am aware of. The idea is that you feed the function the current time (in whatever form you want to use – it can be seconds, steps of a process, any measurable interval), a start value, the desired total change in value, and the total duration, and it'll spit out the required result for how far along in the animation you are. Gizma.com has a really straightforward interactive demo of how easing works that you can play around with to get the general idea of how these play out in practice. I tracked down an Easing library on GitHub by user "chicoplusplus" which packages these functions in a tidy and accessible way for Arduino usage, and tacked it into the VK's program.
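Purely as an illustration (this is the classic Penner form of the math, not necessarily line-for-line what the chicoplusplus library does), a quadratic ease-in looks like this:

Code:
// t = current time or step, b = start value, c = total change in value, d = total duration.
// Quadratic ease-in: starts slow and accelerates toward the end.
float easeInQuad(float t, float b, float c, float d) {
  t /= d;                 // normalize progress to the 0..1 range
  return c * t * t + b;
}

// e.g. sweeping a frequency from 550 up to 6050 over 250 steps:
// int freq = (int)easeInQuad(step, 550, 6050 - 550, 250);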

The final library I leaned on here was one simply called “Bounce2” by Thomas Fredericks. This one is a bit more esoteric in nature, and has nothing to do with animation, despite what you might think. Instead, it handles button input on the arduino. The primary article on the Arduino.cc page does a good job of explaining the issue, but the simple summary is that when you push a button, it doesn’t do exactly what you think it does. Mechanical, physical, and even electrical issues inside the physical buttons mean that when you push one to make a contact, it doesn’t do so in simple on-off fashion. Instead, you often get noise (sometimes called ‘chatter’) during the transition from on to off before the button reaches a ‘stable interval’ and is effectively switched. A visualization of the signal helps understand the issue:



Getting buttons to behave consistently is a two-step process. First, you have to make sure that the input pin you are reading the button from is not 'floating'. Input pins can be very sensitive to change, and unless they are deliberately pulled to logical LOW or HIGH, they will pick up stray capacitance from nearby sources. We're talking anything – breadboards, human fingers, a charge in the air, Wi-Fi, the background radiation of the universe. All of it will create chatter and unpredictable results when attempting to read a pin. We eliminate this with a resistor connecting the button to either a voltage source (3.3V, 5V, whatever your high value is) or ground. These are called "pull-up" or "pull-down" resistors, depending on which way around you have them hooked up, and are essentially there to eliminate the chance that your button-reading input pin picks up stray ambient electrical noise.

Handily, the Teensy (and a number of other Arduino boards) have internal pull-up and pull-down resistors, so we can skip adding extra electronic components! Instead, we simply use the pinMode declaration:

pinMode(SCAN_PIN, INPUT_PULLUP);

This sets the internal resistor as a pull-up resistor for the pin in question, meaning we will no longer be getting ambient noise from stray electrical signals that might otherwise be misinterpreted as a button press.

The other step of the process is called “Debouncing”, which is done in code. This sets an interval during which any changes in a digital signal will be ignored, allowing the ‘chatter’ from a button press to be disregarded. The intervals don’t have to be long at all – something like 20 milliseconds is more than sufficient for most tactile switches – and ensure that the reading you get from a button is showing you a deliberate button press. The Bounce2 library makes this trivially easy – you simply make a Bounce object for each pin you want to monitor and it does the job of parsing the signal changes out into meaningful logic.
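For anyone who wants to see it in context, the setup looks roughly like this (the pin number is a placeholder, and you should defer to the library's own examples if its API has shifted between versions):

Code:
#include <Bounce2.h>

#define SCAN_PIN 2                 // whichever pin the scan button is wired to

Bounce scanButton = Bounce();

void setup() {
  scanButton.attach(SCAN_PIN, INPUT_PULLUP);   // enables the internal pull-up on the pin
  scanButton.interval(20);                     // ignore chatter for 20 ms after any change
}

void loop() {
  scanButton.update();             // has to be called every pass through loop()
  if (scanButton.fell()) {         // pin went LOW = button pressed (with pull-up wiring)
    // startScan();                // kick off the scan sequence here
  }
}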

One of the other significant changes since my last post was the discovery of additional functionality in the ILI9341_t3 library for alternative ways of drawing bitmaps. Up until that point, I had been drawing multi-colored images with the drawBitmap() function, which took a uint8_t array of bitmap data and a single color and drew that information out in a top-to-bottom pass. This worked fine, but in order to draw an image with, say, four colors, you had to have four separate image arrays and call each one in sequence with the correct color. This was tedious and, honestly, a bit slow in terms of how quickly the images would appear on the screen.

Turns out, there are better ways of doing this! The writeRectBPP functions in the library are specifically designed for this kind of thing, but I didn't realize it until I got well into reviewing how everything worked. The "BPP" part of the function name stands for "bits per pixel", and indicates the color depth of the drawing function in question. 1BPP is basically what I had been doing – you have two colors, generally "on" or "off", while drawing the bitmap. 2BPP expands that range to 4 colors, which is analogous to what I had been inadvertently doing before. But 4BPP… that is where things get interesting. That is the realm of 16 colors, which may not sound like an awful lot, but actually begins to make images look realistic, rather than something that's coming off a Commodore 64.
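To make the "bits per pixel" idea concrete (this is my own illustration of the packing, assuming the high nibble comes first, rather than code lifted from the library), 4BPP just means two pixels share each byte of the array:

Code:
const uint8_t pixels[] = { 0xA3 };         // one byte = two pixels: palette entries 10 and 3
uint16_t palette[16];                      // 16 RGB565 colors, filled in elsewhere

uint8_t packed = pixels[0];
uint8_t leftIndex  = packed >> 4;          // high nibble: 10
uint8_t rightIndex = packed & 0x0F;        // low nibble: 3
uint16_t leftColor  = palette[leftIndex];  // both get looked up in the 16-entry palette
uint16_t rightColor = palette[rightIndex];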



The effect is generally helped by the fact that we’re colorizing all of these images to be primarily blue, which means we don’t have to have as big a palette.
The writeRect4BPP function looks for a uint8_t array of pixel data and also uses a uint16_t array to serve as a palette of colors while drawing. With this information, it can draw out a 16-color bitmap in a single pass, far more efficiently than what I had been doing, and in a way that looks dramatically better. The only real challenge was converting my images into the correct data formats. I tried a number of methods before stumbling onto what appears to be the ideal tool for the job – a company called "SEGGER" has a bitmap conversion tool ("BMPCVTDEMO") that makes the process extremely straightforward.



You can trim the color palette down to 16 colors easily, and the software even allows you to save the bitmap as a .c file, which contains all of the necessary bitmap information in an unsigned char array, which happens to be equivalent to a uint8_t array like what the function is looking for.



The same file also includes the necessary palette information. However, it was not without some drawbacks! Actually establishing what the color palette values should be is a bit of a pain in the ass, as the ILI9341 display libraries are generally looking for color information in RGB565 format (which was the subject of one of my previous write-ups). The format that this puts out is RGB888, which is stored in uint32_t format rather than uint16_t which the drawing function expects. This won’t work without some tweaks. In order to be able to use the palette data that BMPCVTDEMO.exe was putting out directly, I overloaded the writeRect4BPP function to accept an additional type of palette data in uint32_t format:

Code:
void ILI9341_t3::writeRect4BPP(int16_t x, int16_t y, int16_t w, int16_t h, const uint8_t *pixels, const uint16_t * palette )
{
                // Simply call through our helper
                writeRectNBPP(x, y, w, h,  4, pixels, palette );
}
 
void ILI9341_t3::writeRect4BPP(int16_t x, int16_t y, int16_t w, int16_t h, const uint8_t *pixels, const uint32_t * palette )
{
                //            We've got 32-bit color data incoming here (probably from SEGGER BMPCVTDEMO.EXE), need to shift it down to 16-bit.
                uint16_t newcolors[16];
                for(int i = 0; i<16; i++){
                                newcolors[i] = color888to565(palette[i]);
                }
                writeRectNBPP(x, y, w, h,  4, pixels, newcolors);
}
It then runs the palette colors through another function called “color888to565” which handles the color conversion from RGB888 to RGB565 and returns the expected uint16_t palette of colors:
Code:
uint16_t color888to565(uint32_t color888) {
    int red   = (color888 >> 16) & 0xff;
    int green = (color888 >> 8)  & 0xff;
    int blue  =  color888        & 0xff;

    return ((red / 8) << 11) | ((green / 4) << 5) | (blue / 8);
}

With this in place, we can use the output from the bitmap conversion program pretty much directly, making my life substantially easier if I end up in a position where I am personalizing things like the user/operator image for potential clients or customers.
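As a hypothetical example of what the end result looks like in the sketch (the array names are placeholders – BMPCVTDEMO generates its own based on the file name, typically as unsigned char / unsigned long arrays, which line up with uint8_t / uint32_t on the Teensy):

Code:
extern const uint8_t acEyeImage[];      // 4-bit pixel data from the exported .c file, two pixels per byte
extern const uint32_t palEyeImage[16];  // the RGB888 palette from the same file

// The uint32_t overload quietly converts the palette to RGB565 before drawing.
tft.writeRect4BPP(60, 80, 140, 140, acEyeImage, palEyeImage);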

With the preliminary structural code changes outlined above out of the way, I had to dive into actually taking all of these separate pieces and making them move. The first obvious issue is how to animate screen-to-screen transitions – from the LAPD start screen to the main UI, or the main UI to the profile page, for example. Here's where I uncovered a new trick inherent to the ILI9341 display: scrolling. A review of the datasheet for the device (page 123) reveals that the display driver built into the ILI9341 TFT actually has scrolling functionality, at least in the sense that it can move data up or down on the screen without changing the actual pixel data. By writing a command to the VSCRSADD register, you can give the device a vertical offset value and it will move the screen up by that amount in pixels, wrapping the overflow back around to the bottom of the screen. Here's an example of the start screen offset by 120 pixels upwards to illustrate:



Unfortunately, it can only do this in a vertical direction – I guess whoever manufactures this didn’t see the need for horizontal, or assumed you’d find another way of doing it. You could with some substantial work, I wager, but it wasn’t worth the trouble for what I was doing. The key feature of this functionality is that, combined with the clipping region changes I implemented, I suddenly had a way of faking screen-to-screen transitions in a smooth fashion. As you push the image up and off the top of the screen, you draw the background color over the parts that are wrapping around to come up from the bottom.
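Just to sketch the idea (the helper names here are mine – the actual write goes through whichever low-level command/data functions your copy of the display library exposes, so treat this as pseudocode rather than a working call):

Code:
#define ILI9341_VSCRSADD 0x37      // Vertical Scrolling Start Address command from the datasheet

// Hypothetical wrapper: sendCommand()/sendData16() stand in for the library's own SPI write calls.
void setScrollOffset(uint16_t offset) {
  offset %= 320;                   // the panel is 320 pixels tall; anything past that wraps around
  sendCommand(ILI9341_VSCRSADD);
  sendData16(offset);
}

// setScrollOffset(120) shifts the image up 120 pixels, with the displaced rows
// reappearing at the bottom of the screen.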



The final part of the transition is to use the same technique to start drawing the screen you’re arriving at, utilizing the clipping rectangle to ensure that you are only drawing the region of the screen that is appearing, rather than the entire UI. Slap one of the aforementioned easing functions on the process (which is not as easy as that sounds, but still plenty doable) and you have natural screen-to-screen transitions that require very little actual memory usage.

Still with me? Probably not, but don’t worry, I’m relentless.

The eye analysis is one of the biggest parts of the actual animation process, and to that end received basically an entirely new class of variables and functions while I was setting everything up. In my mind, the way the portable Voight-Kampff scanner would function is in two big phases – one phase where it recorded or captured image data about the subject being scanned, and another phase where it analyzed that information. I wanted to make the whole thing a fairly dynamic event – the scene where the device is used in Blade Runner 2049 is during a brawl, and the target puts up a decent fight to resist being scanned. The image would have to sway and move, either to simulate the operator’s hand not being perfectly steady under those conditions, or the movement and struggle of the targeted suspect. Mirroring some of our current camera-phone or webcam technology, I thought the image should also go randomly in and out of focus as the device attempted to resolve a clear enough image of the eye.

During the initial ‘recording playback’ phase following the scan, all of these elements come into play. In simple algorithmic terms, there are separate functions and timers generating a random X and Y sway for the image, and a third that periodically adjusts the focus of the image. The images themselves are 140×140 pixels, but using the setClipRect() function are constrained to the 120×120 pixel region on the UI for the eye graphic, which means I can have a maximum of ten pixels sway up, down, left or right. In practice, I found that a maximum of about 5 or 6 pixels in either direction was plenty, and the full 10 ended up being a bit too much movement. The blur is something I played with a lot while I was setting it up – initially I had up to six levels of blur, to make it a really smooth transition, but that required a lot of bitmap data. In the end, I settled for having two levels of blur – a 1 pixel Gaussian blur, and a 2 pixel one – which felt good enough to get the idea across.
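In rough pseudocode, the sway portion boils down to something like this (variable names and the interval are illustrative, not lifted from the project; EYE_X/EYE_Y would be wherever the 120×120 eye window sits in the UI, and eyeBitmap/eyePalette stand in for the current 140×140 image data):

Code:
elapsedMillis swayTimer;
const int SWAY_MAX = 6;                     // +/- pixels of movement that felt about right

void updateEyeSway() {
  if (swayTimer > 120) {                    // pick a new offset every ~120 ms
    int swayX = random(-SWAY_MAX, SWAY_MAX + 1);
    int swayY = random(-SWAY_MAX, SWAY_MAX + 1);
    tft.setClipRect(EYE_X, EYE_Y, 120, 120);               // constrain drawing to the eye window
    tft.writeRect4BPP(EYE_X - 10 + swayX, EYE_Y - 10 + swayY,
                      140, 140, eyeBitmap, eyePalette);    // 140x140 source image, nudged inside the window
    tft.setClipRect();                                     // back to full-screen drawing
    swayTimer = 0;
  }
}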



These all play for a randomly-determined length of time – between two and three seconds – while the device simulates a review of the image data. Once that period of time is up, there is a milliseconds-long phase where it re-centers the image and puts it back in focus before proceeding, essentially to show that the scanner has obtained a lock-on of the necessary image.

Then we step into the “zooming” phase of the animation. I ended up slightly borrowing the way this process looks from the demo reel Territory Studios (the company responsible for the film’s screen graphics) put together for Blade Runner 2049.



Basically, the vertical bar sweeps down from the top of the screen and, in my version, passes back up. As it sweeps, it leaves the image of a zoomed-in eye in its wake. These zoomed-in images were pre-baked simply because the Teensy doesn't have the memory required to do advanced bitmap operations like scaling and interpolation in addition to everything else I'm asking of it.



I settled on two steps of zoom for what I needed, though that was more out of consideration of the available memory on the Teensy 3.2, rather than my own desire – I would have gladly put more intermediate bitmaps in otherwise.

The final trick, once we arrived at the zoomed-in eye, was what to do about the actual readout the device would show. In the event the target is human, nothing of consequence should really be visible. On the other hand, if the target is a Replicant, we should be finding their Nexus ID number on the white of the eye. I actually do have the program very faintly print the ID number over the eye image during the last level of zoom-in, and a keen observer can spot the watermark of this code before the analysis completes. This only shows when the target is a Replicant – no text is added over ‘human’ eyes.

I included a brief animation where it draws 50 or 60 red “+” marks on the lower half of the eye image as the scanner engages in another pass of ‘analysis’, this time searching for the Nexus ID. Once this pass is completed, and if the target is found to be a Replicant, it’ll highlight the discovered ID number on the eye in red, letter-by-letter. It also transcribes this data into another field on the left side of the screen as it goes.

In total, I’ve got 4 sets of eyes built into the device at the moment. I would have gone for more, but I’m simply constrained by memory. Even the Teensy’s beefy 256k of flash starts getting full when you are throwing as many 16-color bitmaps at it as I have been. Still, I think there’s a reasonably good variety here, and the device is configured to never do the same eye graphic twice in a row, so hopefully they don’t get stale.



Naturally, Maria and I ended up using pictures of our own eyes in this mix. :)

The rest of the UI plays out about how you'd expect – text fields populate information about the target that can be ascertained, and the thumb print fields up at the top animate in a sequential grid fashion to draw their fingerprint images out, assuming the device 'finds' fingerprints for the target. I somewhat simplified the other 'analysis' elements – I removed the "GCAT" label from the horizontal bars beneath the eye image, thinking that the implication that the device could actually scan someone's DNA might be a step further than was warranted by its size and simplistic nature. Without the DNA-associated text, it simply forms a neat spectral line element that I think works rather nicely. Likewise, my initial thought for the field of 'blips' on the left side of the screen was to draw target marks out across the eye, but that got replaced by my decision to actually write the Nexus ID letters onto the eye graphic and then pick them out after. Both of these elements animate briefly and, if the target is a Replicant, lock down with red markers that are intended to convey that synthetic elements have been detected.

The profile page has its own selection of animations, but most of them are dedicated to outputting the necessary text elements, and don’t have quite the same level of involved programming as the eye analysis animations, and therefore won’t be something I spend as much time going through. I will say that all of the information that gets displayed is pulled from a dialog file that is very easy to edit, and all of the output is responsive to any changes made to that dialog file. If I ever decided, for example, to re-write the text field at the bottom of the profile page that announces the device user has the authority to detain, identify, or retire individuals, I would simply have to make those changes in the dialog file and the animation and UI would update accordingly to ensure that everything fit and was spaced out properly. I may never end up using that functionality as I may never end up changing the text that’s there, but I know future-me will be grateful if I ever do need to do that.
 
If you look earlier in the scene with Sapper, when K pulls out the scanner while sitting at the table and turns it on, a few seconds later he tests the scanning button for a moment. That would be the best place to determine how it sounds when just scanning. To my ears, the frequency sounds higher than your first version.

Amazing work, by the way.
 
If you look earlier in the scene with Sapper, when K pulls out the scanner while sitting at the table and turns it on, a few seconds later he tests the scanning button for a moment. That would be the best place to determine how it sounds when just scanning. To my ears, the frequency sounds higher than your first version.

Amazing work, by the way.

This is a good call - I hadn't remembered that he tested the scanning button while sitting there. I went back and did the best I could to clean up the totally crap-quality audio I have to reference from that scene in the film. Unfortunately, Sapper talks over some of the initial noise that follows the device snapping open, but it's enough to tell that I have some changes to make. Here's the best I could isolate - first sound is the startup, second sound is the 'test' that I guess should be the scanning noise.

http://tindeck.com/listen/ddvkk
 
This is a good call - I hadn't remembered that he tested the scanning button while sitting there. I went back and did the best I could to clean up the totally crap-quality audio I have to reference from that scene in the film. Unfortunately, Sapper talks over some of the initial noise that follows the device snapping open, but it's enough to tell that I have some changes to make. Here's the best I could isolate - first sound is the startup, second sound is the 'test' that I guess should be the scanning noise.

http://tindeck.com/listen/ddvkk

Here's a little startup sound test I played around with at the weekend, if it's any use to you - http://www.candykiller.com/extras/scanner.wav
 
Here's a little startup sound test I played around with at the weekend, if it's any use to you - http://www.candykiller.com/extras/scanner.wav

This actually has been really handy, thanks. I have no idea how you came up with that audio, but it sounds pretty much exactly like what I was wishfully hoping I could have extracted out of the footage I could find.

The Teensy 3.2 can play direct sound effects that are encoded onto the board, in the same way that it can show bitmaps that have been converted into arrays, but memory is always going to be a significant hurdle for that kind of thing. At a ~11 kHz sample rate and 8-bit depth (roughly 11 KB per second), the device can hold about 20 seconds of sound – and nothing else. Right now, with my bitmaps and other data, I'm around the 90-95% mark on the Teensy's memory as-is.

There are definitely solutions for that, but they involve adding additional hardware - specifically, something like this audio adapter board, which handles some of the sound calculations and also gives you a microSD card reader to hold the sounds. It'd probably work, assuming you could budget the extra space inside the device for it, but it'd make each device more expensive. Moreover, I don't actually know if there would be an issue with using a board like this and the TFT display, as they both use SPI communication, which means they both want to use the same pins on the Teensy for different things.

I'm stubborn, and I wanted to see if I could duplicate the same sounds with the tone() function or similar approximations anyway, so I spent a bit of time this morning just trying to match this audio through trial and error. I've swapped the default tone() library out in favor of toneAC(), which is a nice little package, although I'm not using it the way it's intended. toneAC is supposed to use two pins on the Arduino instead of a pin and ground, the idea being that it makes your speaker twice as loud. Whatever – the really useful part for me is that even if you're using just one pin on the microcontroller, this library lets you set volume levels by tweaking the duty cycle. The normal tone() library can't do volume control, so sound effects just end abruptly, but toneAC() can fade the signal out. The trade-off is that the waveforms generated by toneAC sound a bit rougher. I can live with that.
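The volume parameter is the part I actually care about. From my reading of the toneAC documentation, basic usage looks something like this (volume runs from 0 to 10, and the rest is placeholder demo code):

Code:
#include <toneAC.h>

void volumeDemo() {
  toneAC(4000, 10);   // 4 kHz at full volume
  delay(100);
  toneAC(4000, 3);    // same pitch, noticeably quieter
  delay(100);
  noToneAC();         // stop the output entirely
}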

I grabbed @Candykiller's file and dragged it into an audio editor so I could check the frequencies out.



You can actually isolate just one of those lines, which is something I didn't know until this morning. It helps make analysis a little simpler.



That single line ends up sounding like this:

http://tindeck.com/listen/pzjqx

I actually think it goes on for too long. I know the wind-up audio in the scene in the film does drag out over a few seconds, but I want to get this thing played and done with inside of 3 or 4 seconds so that the device can move on to doing other things, rather than keeping the user waiting prior to doing a scan.

Code:
struct startupWhine {
  bool complete = false;
  int startFreq = 550;
  int currentFreq = startFreq;
  int endFreq = 6050;
  int step = 0;
  int duration = 250;
  int timerLimit = 10;
  elapsedMillis timer;
};

void Animation::startupWhineEffect() {
  if (whineAssist.timer > whineAssist.timerLimit && !whineAssist.complete) {
    if (whineAssist.step <= whineAssist.duration) {
      whineAssist.currentFreq = Easing::easeInSine(whineAssist.step, whineAssist.startFreq,
                                                   whineAssist.endFreq - whineAssist.startFreq,
                                                   whineAssist.duration);
      int vol = Easing::easeInSine(whineAssist.step, 10, -10, whineAssist.duration);
      toneAC(whineAssist.currentFreq, vol);
      whineAssist.timer = 0;
      whineAssist.step++;
    } else {
      noToneAC();
      whineAssist.complete = true;
    }
  }
}

I built this function into one of my animation objects so that I could utilize my animation easing functions and get the audio output to curve a bit more gradually. I've tried nearly every easing function I can think of - quadratic, cubic, easing in-out rather than just in - and I think the sine one gives me the best approximation of what I'm after. The one I've got coded up here slides from around 550 Hz at the start up to ~6,000 Hz at the end, though it doesn't really accelerate up that scale until more than halfway through the audio. As I mentioned, it's a fair bit shorter, just so that I can get the device to a ready-to-scan state as quickly as possible.

Here's my rough attempt at duplicating @Candykiller's file on the Teensy (recorded using my PC mic)

http://tindeck.com/listen/gprug

I did notice an interesting problem: The audio can get locked or stutter slightly while the Teensy is handling other animations. Here's a spectral analysis of the startup sequence:



There are two big spots where it looks like it gets 'stuck', which seem to coincide with the LAPD start screen logo animations. You can see them as the flat lines on the graph.



It doesn't sound terrible, but it is a little noticeable. I've been trying to work around it by increasing the timer rate for the function that generates the audio during those animation sequences (down to 5 milliseconds between steps), and returning it to normal (10 milliseconds between steps) after the animations are done.
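In code terms, that workaround is just swapping the timerLimit value on the whine struct while the heavy drawing is happening (the logoAnimationActive flag below is a placeholder for however that state gets tracked):

Code:
if (logoAnimationActive) {
  whineAssist.timerLimit = 5;    // service the tone more often while the display is busy
} else {
  whineAssist.timerLimit = 10;   // back to the normal pacing between steps
}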

What do you guys think? Is this moving in the right direction?
 