JARVIS ver 3.0

So Jarvis's brains had to be upgraded (sidegraded?) tonight. This is due to some technical issues with the Raspberry Pi; however, I have another small board computer that is hopefully up to the task. It's a Udoo Dual board, and it has an integrated Arduino with a crazy amount of program storage and lots of RAM :). One of the only things I haven't figured out yet is whether it will take a regular webcam or if I have to switch to the special Udoo webcam they have available. I should know by the end of tomorrow whether or not everything is going to work out well.

Edit: Jarvis must hate me because he can't figure out which brain would be best, lol. The Udoo board is out of the question, so I'm going to have to make it work with the RPi. Hopefully I can have a photo or video up by the end of the day.
 
Out of curiosity...

Why can't you use the Udoo any longer?

What was the initial technical issue with the Pi that led you away from it?
 
The problem with the Pi is that it only has one available UART for serial communication with the Arduino. Basically I'm using an Arduino Zero with a little bit of muxing of the output pins to turn them into extra serial ports. The problem I have right now is that I need at least two UARTs: one for the 10-DOF sensor and another for other incoming data like repulsor status, eyes, etc. I have major OCD when it comes to this, because I would like to be able to just hook a couple of wires straight up to the Pi's GPIOs and be done with it; however, if I have to go the route of using the single UART on the GPIO pins plus a USB connection, I will.

I'm going to look into possibly using I2C for the other data, but I don't know if that's going to work, especially since the 10-DOF is connected to the Arduino via I2C, which makes the Arduino a master device. If I were to then hook the Arduino up to the RPi, I would have two competing master devices on the bus and no slaves. That would take a lot more coding than I actually want to do to get it working correctly.

That brings me to the Udoo board, which is a very nice board, but I found out last night that it has a lot of issues with Processing 3 and OpenGL. I'm going to continue with the Pi for now, but I'll also look into getting the Udoo board up and running, because I've had that board for a while and it's been collecting dust. I'd like to put it to work in this project, maybe.
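In case anyone wants to try the same pin-muxing trick, here's a rough sketch of how a second hardware UART can be brought up on an Arduino Zero/M0 by remapping one of the SAMD21's spare SERCOM units. The SERCOM1-on-D10/D11 mapping, the pad choices, and the baud rates below are assumptions pulled from the usual tutorials, not my final code, so double-check your board's variant file before wiring anything:

[CODE]
// Sketch (assumptions only): second hardware UART on an Arduino Zero/M0
// using SERCOM1. Pin and pad choices are placeholders; verify them against
// your board's variant.cpp before use.
#include <Arduino.h>
#include "wiring_private.h"   // pinPeripheral()

// Extra UART: RX on D11, TX on D10, driven by SERCOM1
Uart Serial2(&sercom1, 11, 10, SERCOM_RX_PAD_0, UART_TX_PAD_2);

void SERCOM1_Handler() {
  Serial2.IrqHandler();       // route the SERCOM1 interrupt to our Uart object
}

void setup() {
  Serial1.begin(115200);      // stock UART on D0/D1 -> Raspberry Pi GPIO serial
  Serial2.begin(115200);      // new UART -> 10-DOF / status data
  pinPeripheral(10, PIO_SERCOM);  // hand pins 10 and 11 over to SERCOM duty
  pinPeripheral(11, PIO_SERCOM);
}

void loop() {
  // Pass anything arriving on the second UART straight up to the Pi.
  while (Serial2.available()) {
    Serial1.write(Serial2.read());
  }
}
[/CODE]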
 
So this is just a general question for everybody that may want to put this system into an Iron Man/War Machine suit: what kind of features in the HUD would draw you to using this, and what features would you like to see me add? I've got a few, and one I'm keeping secret for now, but you will all like it when it's finished. :) Let me know in the thread and I'll see what I can do about adding them.
 
It's been a few months since I last posted, but in that time I've been busy. I've set out to create a working "J.A.R.V.I.S." of sorts. Basically I've taken my Mark 42 helmet, printed it, and given it the Mark 43 paint job. That's where things get mildly complicated. In the helmet, I've added my voice recognition module and given it a pumped-up Arduino with more program storage and more RAM. With this new Arduino, I'm going to create a bunch of predetermined questions and answers along with the basic yes/no type answers. In the helmet I'm also adding the basic stuff like a helmet servo, lights for the eyes, and a jaw servo for movement when opening the face plate. On top of that, I'm adding a 320x240 screen inside the helmet for a HUD readout showing data from a 10-DOF (degrees of freedom) sensor, battery life, repulsor status, etc. I'm going to get this feedback from the forearm via a Bluetooth chip, and that way I can make the forearm do cool stuff via J.A.R.V.I.S. in the helmet. An idea I plan on implementing is ULTRON, and you can see the initial test for the forearm below.
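Just to give a flavor of the predetermined question/answer idea (this is not my final code), here's the kind of lookup I have in mind on the helmet Arduino: the voice recognition module reports a recognized phrase as a single command byte over serial, and the sketch maps it to a canned response plus a servo/LED action. The pin numbers and one-character command codes are made-up placeholders:

[CODE]
// Sketch (placeholders only): map voice-command codes to canned responses
// and helmet actions. Pins and command bytes are examples, not final values.
#include <Arduino.h>
#include <Servo.h>

Servo jawServo;
const int EYE_LED_PIN = 6;     // example pin for the eye lights
const int JAW_PIN     = 9;     // example pin for the jaw servo
bool eyesOn = false;

void setup() {
  Serial.begin(9600);          // link from the voice recognition module
  pinMode(EYE_LED_PIN, OUTPUT);
  jawServo.attach(JAW_PIN);
  jawServo.write(0);           // start with the jaw closed
}

void loop() {
  if (Serial.available()) {
    char cmd = Serial.read();  // one byte per recognized phrase
    switch (cmd) {
      case '1':                              // "open the face plate"
        jawServo.write(90);
        Serial.println("Face plate open.");
        break;
      case '2':                              // "close the face plate"
        jawServo.write(0);
        Serial.println("Face plate closed.");
        break;
      case '3':                              // "eyes on/off"
        eyesOn = !eyesOn;
        digitalWrite(EYE_LED_PIN, eyesOn ? HIGH : LOW);
        Serial.println("Eye lights toggled.");
        break;
      default:                               // unrecognized phrase
        Serial.println("Sorry, I didn't catch that.");
        break;
    }
  }
}
[/CODE]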

As of now the forearm does "fire", because it is hooked up to an EMG sensor; basically, this sensor reads the electrical impulses in the muscle. However, when I made these videos I turned that portion of the code off, because "false" readings would make it continually "fire". I also want to add that nothing in these videos is final, since these are the first tests I've had time to do with the code.
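For anyone fighting the same false-trigger problem, one common fix (and roughly what I'm planning, though the numbers here are placeholders rather than calibrated values) is to low-pass filter the EMG reading and require it to stay above a threshold for a short hold time before firing:

[CODE]
// Sketch (placeholder values): debounce an EMG reading so a single noisy
// spike doesn't trigger the repulsor. Pin, threshold, and timing are examples.
#include <Arduino.h>

const int EMG_PIN = A0;              // analog input from the EMG sensor
const int FIRE_THRESHOLD = 600;      // raw ADC level that counts as a flex
const unsigned long HOLD_MS = 150;   // must stay flexed this long to fire

float smoothed = 0;
unsigned long flexStart = 0;
bool firing = false;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(EMG_PIN);
  smoothed = 0.9 * smoothed + 0.1 * raw;   // simple low-pass filter

  if (smoothed > FIRE_THRESHOLD) {
    if (flexStart == 0) flexStart = millis();
    if (!firing && millis() - flexStart >= HOLD_MS) {
      firing = true;
      Serial.println("FIRE");              // trigger lights/sound here
    }
  } else {
    flexStart = 0;                         // reading dropped, reset the timer
    firing = false;
  }
  delay(5);
}
[/CODE]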

I'll make more updates as I progress in the code.

- devildog12

JARVIS TEST 1
https://www.youtube.com/watch?v=GpSE-snBrGQ


ULTRON TEST 1
https://www.youtube.com/watch?v=kGZF1Kw17fg

Get the **** out of here?!?! This I gotta see. I'm interested in how you're gonna work that out so you can actually see it when it's so close to your eyes.


Sent from my iPhone using Tapatalk
 
So I thought I'd share a quick update for everyone today. I've been doing some looking through older threads in the forums, and it seems that @LexiKitty started this type of project a few years ago but hasn't been back. I've tried messaging her through here and through her blog site about how much progress she made and what kinds of problems she ran into. I also noticed, I think on her blog site, that she had some files that weren't able to run on the Raspberry Pi at the time. I've downloaded her files and redid the code so it works on the Pi. I thought to myself, why not base my work off hers and add my own flair (I don't really want to reinvent the wheel). I used some of her code for my foundation and have added a lot more; it is very usable and fluid. I really like her HUD layout as well.

I've also been having some issues making a final decision on the small board computer I want to use. Basically I've come down to using the Pi, mainly because I want anybody to be able to use this for future suits, and because of cost. I have boards in my possession that would quite easily handle this sort of thing without too much effort; however, the Pi is widely available and so are Arduinos. I'm gonna try and upload a video or two of this thing in some type of action. :)

Link to LexiKitty's thread: The Iron Girl Project
Big props to LexiKitty on her progress.
 
Hi everyone.

So I just want to throw this in here because I haven't seen any posts or research done on it: there are boards that handle this kind of HUD processing rather easily.
You can look into the Betaflight F3. These are race-drone flight controllers, but they do have telemetry feedback from the drone to the FPV goggles, which means orientation. Depending on whether you have GPS installed in the drone, you can also get airspeed, battery status, signal strength, and all those other yummy goodies which would be really nice to have in a project like this.

From what I have noticed, you are trying to have one processor do all the work. Why not rather have the Pi do the video and on-screen display rendering, but have the actual information come from something like an Arduino and just feed it in? And instead of trying to generate graphics, which takes a lot of processing power the RPi doesn't yet have, why not first do a text-based GUI?
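Just to make the split-processor idea concrete, here's a rough, untested sketch of the Arduino side: it prints one plain-text telemetry line per update, which the Pi could read off the serial port and draw as simple text before anyone worries about fancy graphics. The field names, fake values, and update rate are only examples:

[CODE]
// Sketch (example only): stream simple text telemetry lines that the Pi can
// read from serial and render as a text-based HUD. Values are stand-ins.
#include <Arduino.h>

void setup() {
  Serial.begin(115200);        // serial link to the Pi
}

void loop() {
  // In a real build these would come from the IMU, battery monitor, etc.
  // They're hard-coded here so the sketch runs on its own.
  float roll    = 1.5;
  float pitch   = -3.2;
  float yaw     = 270.0;
  int   battery = 87;          // percent

  // One comma-separated line per update; easy to split on the Pi side.
  Serial.print("ROLL:");   Serial.print(roll);
  Serial.print(",PITCH:"); Serial.print(pitch);
  Serial.print(",YAW:");   Serial.print(yaw);
  Serial.print(",BATT:");  Serial.println(battery);

  delay(100);                  // roughly ten updates per second
}
[/CODE]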
 
I think you're misunderstanding what's going on here. The Raspberry Pi does do all the video processing, and it also handles the information received from the Arduino(s). The problem up to this point was that with the Raspberry Pi before last year or the year before (2015), I would have had to code this entire project pretty much by hand in Python, and even then I wouldn't have had a reliable frame rate. Now, with the Processing IDE on the Raspberry Pi 3, with its quad-core processor and 1 GB of RAM, it's quite capable of building a HUD. The main point of this project is to make the suit more like an actual Iron Man/War Machine suit, in that we can have a quasi (or actual) working artificial intelligence in the suit feeding us whatever information we want, i.e. AHRS/GPS/systems/...the list goes on and on.
I have looked at FPV goggles for drones, and the price point is too high for me and many other cosplayers/replica builders. The main reason they are out of reach is that most drone FPV goggles have wireless capabilities, and that significantly increases the price; in a self-contained suit, wireless capability is unneeded. As far as Arduinos go, I am using (so far) the M0 as the primary microcontroller and a stripped-down Uno as a secondary controller. The M0 has superior storage space (256 KB) and 32 KB of RAM for gigantic projects. This far exceeds any flight controller in terms of reliability and performance. Also, if things were to get really complicated, you can actually run Python-coded programs on the M0. Maybe I explained things wrong throughout the thread (I'm sorry). This project keeps growing as I keep thinking of things to add, and as time goes on, technology keeps getting better, which allows for more awesome creations in the HUD. Thanks, devildog
 
Here is a quick video of the HUD up and running on the Pi. The only information it's got coming in is the AHRS data; I don't have JARVIS integrated quite yet (he's quite stubborn, lol). I'm hoping to have him in within the next week or two. As you can see in the video, I'm using LexiKitty's HUD assets, just not her code. She really did do a great job on those assets; big props to her again. Also, the audio you hear in the video is actually coming from the HUD/Pi code and is being routed to an external speaker, and I didn't plan how almost perfectly the HUD pops up on cue with the audio.

 
So I'm hoping to have a "pre-alpha" (I guess that's what it's called?) released in about a month. Right now I'm working on using a cloud-based voice recognition service from the likes of Google, Microsoft, or even IBM Watson; integrating those APIs into an existing setup has been kind of a pain so far, but I'm slowly making progress. The HUD work is mostly on hold until I can get that situation sorted. Basically, what's going to happen is that you'll be able to give a command such as "Open the face plate", "Repulsor status", "What's the weather outside?", or even "How did my favorite sports team do last night?", and you'll either get that info in the HUD or it will make the Arduino carry out the command. I have to keep setting goals for this massive project or it'll get sidelined for other things and other projects. *Hopefully* by the end of next week I can upload a video of this thing in action :)
 
I'm using an app called utter. It works great... I can control all functions on my phone, which I'm wearing inside my helmet, with a Bluetooth speaker inside the chest. Check my YouTube videos.

Sent from my U11 Plus using Tapatalk
 
Going to try and get a quick video up tonight or tomorrow. I finally have Google and my Arduino talking the same language, PLUS I have a natural-language AI doing all the talking. As of right now, I am in the very early stages of getting this thing *thinking* with actual machine learning, which is a major milestone. I'm pretty excited for everyone to see what I've accomplished in a few weeks with an actual AI, and not a dummy one like I was using before.
 
This project is literally what I've always wanted to accomplish, especially the HUD display. I have a question about this topic: how are you going to see the HUD display inside the helmet? I mean, you would need some mini display with a high pixel density (for good resolution) if you are going to build something like Google Glass (there are tutorials on YouTube). And you'll have to deal with the problem of your eyes' focus at that distance.
I'm quite curious to read your answer :)
 
Bump on an old thread here....

I am curious if anyone ever found/sourced some of these ultra-tiny LCD screens (OLED? AMOLED?).. with a high resolution?

I have been looking for some, with only minor success....

Also on the lookout for some Fresnel lenses.. with a HIGH (negative) diopter rating??

My HUD is coming along great.. I have no more 'technical' obstacles to overcome... now it's just features (visual cues/overlays to respond to incoming actions/commands).... and the practical screen/lens set-up.

I am currently testing with a 3.5" LCD screen.. which won't work as it's too close to your face..

Looking for (maybe?) 1"-wide high-resolution LCD screens..
 
How can I do a camera setup?

Sent from my SM-J7108 using Tapatalk
Raspberry Pi Zero with the mini camera; it attaches via a small ribbon cable, and with the Pi on board, here are your specs.
Pretty much a full-blown PC. I'm incorporating basic Zarvis .mp3 audio clips (Google) and the camera mounted in the helmet or arc in my current build.
(photo: Raspberry Pi Zero board)

  • 1 GHz single-core CPU
  • 512 MB RAM
  • Mini HDMI and USB On-The-Go ports
  • Micro USB power
  • HAT-compatible 40-pin header
  • Composite video and reset headers
  • CSI camera connector
  • 802.11n wireless LAN and Bluetooth
 
Quantum Stan

He made that post/question roughly 2+ years ago! haha

* Mar 29, 2017


That being said.... I'm curious as to WHAT screen you're using, and how you are viewing it so close to your face/eyes? :)
 
That's funny, I wasn't even paying any attention to the date; it somehow just popped up in my notifications, and the thread interested me.
Flysight FPV goggles... that's what's on hand... 6.6 inches by 2.5 with a curved design... The antennas can be removed because you're only a couple of feet from the Raspberry Pi's Wi-Fi.
https://www.amazon.com/gp/aw/d/B01HCWO9S2/ref=sspa_mw_detail_1?ie=UTF8&psc=1
 
And those fit inside of a helmet?

Did you take it apart and salvage the screens and lenses or something to incorporate them into a helmet???

In the link, they look to be much thicker than 1" or so? (And I'm not even sure 1" is the 'space' available between your eyes and the whole display portion of this)... Did you have to get new Fresnel lenses, or do the stock ones work?

And it's one 'big/long' screen, you are saying? Not two individual smaller ones?

If you have pics, please share or send me some! lol

My last true hurdle is finding a solid way to display my HUD inside of the helmet.. the 3.5" HDMI screen ain't cutting it that close to your face, that's for sure! :)

I have found some possible options.. including some really small/high-resolution displays... but they were -too- small for me (I believe).

ie: micro display: 0.5 inch, 1024x760

** I don't even own a Zero, but I am curious how well my project would perform using one. (Right now it runs off a 3B.)


edit:

One last question: do they take an HDMI input?

These -seemed- to be even thinner in 'thickness' (i.e. the space/distance from the front of the goggles to your face/eyes):

Zetronix Online store

Barring dishing out $350 for a 'test'.. I was looking for just the screens, like in their pics.

Not much bigger than a quarter, and 854x480 resolution... (but I never found them) :(

These glasses also have HDMI input!!.. they 'seem' like the golden ticket, but I can't be sure!? LOL
 

I think I've got a slightly better link for you, more in line with your DIY Headset desires.

Another video showing how it can function as one continuous display

The board to convert HDMI to the signal for the two screens: Anywhere from $50-$100 (I'm guessing that price varies depending on quantity purchased and specs needed)
The screens shown (2.9", 1440x1440 resolution per eye): $70 each, $140 for two
Total: $190-$240

Now, that's still expensive, but you could possibly try smaller/lower quality screens, such as these round ones that the board says it supports. They're only 400x400 per eye, but they're 1.4", which should be easier to squeeze in. At $25-$45 per eye, that would get you down to $100-$190 total. If you only want one eye to test with, then you're looking at $75-$145 to test, which is a much easier loss to accept.

Note that I've never purchased from any of these vendors, or even that website. I would be extremely cautious, sending several emails back and forth to make sure you know what you're getting before you order.
 