AI generated Super Panavision 70 trailers

Alright, here's the deal.

1. Most people don't actually understand how AI works. They can't tell you the difference between machine learning, neural networks, and deep learning. To them, it's all just fancy buzzwords, and they see ChatGPT as a super-powered search engine (it's not -- please don't use it like that).

2. At the core of it, AI is a tool that recognizes and reproduces patterns. That's it. That's all it does. It's a very fancy, very powerful way of gathering information, sorting for patterns, and reproducing what it has determined is a pattern. AI is not creative. AI is derivative, and -- what's worse for artistic purposes -- AI has a tendency to genericize everything.

3. This is why AI art looks both weirdly unreal and very, very samey: AI is, fundamentally, ill-suited to the actual creation of artistic works. Art requires creativity, and all AI does is spit back common denominators at you. The most creative that an AI gets is when it actually hallucinates and produces something that looks like it crawled out of an H.P. Lovecraft story.

4. The true problem with AI and the arts, and the real reason why AI is so controversial, is not simply that the stuff it produces looks like generic uncanny valley trash. It's that it's all based on copyright infringement. The way AIs "learn" (and I want to say that I am loath to use "alive" terminology to describe what AI actually does, because AI IS NOT ALIVE OR SENTIENT, but we're stuck with this terminology until people are more careful about it) is by feeding a bunch of data into the AI, from which it begins to build the ability to recognize patterns. This is often referred to as "training data," which I think conveniently masks what's actually happening. The AI is described as "training" on the data, which makes it sound like the AI is sitting there, reading books, saying "Hmm. Yes, yes. Very interesting. I must remember that part. What an excellent example of metaphor..." But it's not. It's plugging this data into a ****ing computer program so that the program can build derivative works from it.
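To make that concrete, here is a deliberately crude sketch in Python. It is not how large generative models actually work internally -- they fit neural networks with gradient descent rather than filling in a lookup table -- but the relationship to the training data is the same in spirit: "training" is extracting statistics from the input, and "generating" is replaying them.

import random
from collections import defaultdict

# Toy "training data" -- in a real system this would be scraped text,
# images, or video, usually without the creators' permission.
training_data = (
    "the ship jumped to warp and the crew watched the stars streak past "
    "the ship dropped out of warp and the crew watched the planet grow larger"
)

# "Training": count which word follows which in the input. That's it.
follows = defaultdict(list)
words = training_data.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# "Generating": walk the table, replaying the recorded patterns.
random.seed(0)
word = "the"
output = [word]
for _ in range(12):
    candidates = follows.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)
print(" ".join(output))

Every word this toy produces is, by construction, something that already followed the previous word somewhere in the input. Scale the idea up by a few billion parameters and the output gets more fluent, but no less derivative.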

Anyone who works in a creative field, who makes their living off of their own creativity, should be absolutely infuriated at the notion that an AI might've been trained on the sweat of their brow, and they get nothing for it.

And in the end, it still looks like ass.
I agree with a lot of this. I've dabbled in deepfakes and the phrase "garbage in, garbage out" definitely applies. Using AI/machine learning applications can be helpful as a tool, but I feel it should still be used as just a tool, not something that completely generates content. For example, there's still a lot of work required to make the deepfakes that I did - gathering and preparing usable footage & images for the GAN (Generative Adversarial Network) to use, manually editing face markers and masks, editing individual frames after the images are generated, and post-production editing to tweak the results where needed. There's still a lot of human creativity required to make something that looks realistic, and, admittedly, my own efforts still aren't 100% there. Also, I view my deepfakes more as digital makeup. Instead of using "AI" to create content, I'm using it as a tool to try to help with the "suspension of disbelief" required for viewing fictional works.
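To give a sense of the prep work involved, here is a rough sketch of one early step: pulling frames from source footage and cropping out face regions before any GAN ever sees them. This is a generic illustration, not any particular tool's pipeline; the file names and crop size are made up, and real deepfake workflows add landmark alignment, masking, and a lot of manual cleanup on top of this.

import os
import cv2

# Classic Haar-cascade face detector bundled with OpenCV -- crude by modern
# standards, but enough to show how mechanical this stage is.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

os.makedirs("face_crops", exist_ok=True)
cap = cv2.VideoCapture("source_footage.mp4")  # hypothetical input clip

saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        crop = cv2.resize(frame[y:y + h, x:x + w], (256, 256))
        cv2.imwrite(f"face_crops/face_{saved:06d}.png", crop)
        saved += 1
cap.release()
print(f"saved {saved} face crops for training")

Even a step this simple produces plenty of unusable crops that have to be weeded out by hand, which is where the "garbage in, garbage out" problem starts.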

As far as fully AI-generated content being "copyright infringement" goes, I agree, though the proponents and creators of these AI programs say that it's no different than a human learning to paint by copying the masters. The problem with that argument is that humans can't simply copy images in a mechanical way like a computer can. Sure, we can use reference images for creating art, even trace images, but it's not automatic. Computers can copy and manipulate other people's works because it's all just ones and zeros to them. Humans learn and create differently. We have to train our minds and our bodies to create art.

But it's also getting more and more difficult to say how much humans are creating art when we have computer programs that allow us to take shortcuts for tasks that used to be fully manual - any photo editing or graphic design is all computer based now. We used to have fewer specially trained/educated people who could do these things; now anyone can do it (some better than others, of course). For another example, major news outlets have eliminated photojournalists in favor of reporters using their phones to take pictures because the technology has advanced to where it's "good enough." Music creation tech has been eliminating more and more musicians from the process - digital instruments and loops have been replacing live musicians for decades. Autotune and other digital tools can help substandard singers sound better. You can even go back to the advent of the microphone and amplified sound as a reason why singers no longer need to be trained to project over a live band or orchestra. Tech designed to aid us has ultimately been eliminating the need for specially trained and talented people. AI goes even further.

Here's a link to my deepfake thread in case anyone is interested. There's also some similar discussion of the merits of the tech:

 
As an addendum to what I just posted (sorry for the length), the real driving force for AI and any tech used in creative fields is commerce. Cutting down on the time it takes to create anything is simply a way to save money. Eliminating specialized positions in favor of less-skilled people using easy-to-use tech means companies can pay people less. Again, it's a world where "good" has been replaced by "good enough." And most people don't know the difference.
 
The argument that "AI is no different from a human artist studying other art to influence their own" is not valid IMO.

It's like saying the human cloning operation in 'Attack of the Clones' is no different from two parents giving birth to a child.

The scale, cost, and impact of a technology are all critical factors when it comes to legal/regulatory decisions. Defending a concept on principle alone doesn't cut it.
 
Creepy level 10.
 
As a super recognizer, the faces of known actors/actresses are "close" to the real humans. And I use the word "close" because something is missing. Let me explain: the AI looks at thousands of photos of a specific actor/actress and makes a mix of many poses/angles. Of course, the poses/angles info is regurgitated by that AI and... it just "looks" like the actor/actress it is trying to re-create.

If, and that's a big one, the actor/actress had always been photographed at the same poses/angles, then the AI likeness would be exact. But, as you know, that's an impossibility! The AI cannot choose one photo and regurgitate only that photo (copyright problems)... it chooses thousands and bypasses the copyright. As I said before about AI: close... but no cigar (pertaining to faces, of course).
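That "mix of many poses/angles" intuition can be roughed out in a few lines. Modern generators blend learned features rather than raw pixels, so this is only an analogy, but a plain pixel average of many roughly aligned photos shows the same effect: the result still resembles the person while the fine, identifying detail washes out. The folder of aligned crops below is hypothetical.

import glob
import cv2
import numpy as np

# Hypothetical folder of roughly aligned face crops of one actor/actress.
paths = glob.glob("face_crops/*.png")

# Stack them and take the per-pixel average: the blended, slightly
# "generic" likeness the post is describing.
stack = np.stack(
    [cv2.resize(cv2.imread(p), (256, 256)).astype(np.float32) for p in paths]
)
mean_face = stack.mean(axis=0)
cv2.imwrite("mean_face.png", mean_face.astype(np.uint8))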
 
As a super recognizer, the faces of known actors/actresses are "close" to the real humans. [...]
Super Recognizer? Is that a Tron reference? :D
 
As a super recognizer, the faces of known actors/actresses are "close" to the real humans. [...]
Another thing I've realized in my deepfake efforts is that reference images are also highly subject to camera distortion and lighting conditions. Deepfake software tries to blend those mixed images together, but if the images are too distorted by the lens and focus the camera used, or were shot under completely different lighting conditions - whether the deepfake source images or the original footage you're trying to replace - the final product will look "off."
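One blunt way that lighting mismatch gets patched up in post is to shift the generated face crop's per-channel color statistics toward the frame it is being pasted into. The sketch below does exactly that and nothing more; real pipelines use far more sophisticated blending, and the file names are hypothetical.

import cv2
import numpy as np

# Hypothetical inputs: the GAN's output face and a crop of the target frame.
generated = cv2.imread("generated_face.png").astype(np.float32)
target = cv2.imread("target_frame_crop.png").astype(np.float32)

# Match each color channel's mean and spread to the target lighting.
matched = np.empty_like(generated)
for c in range(3):  # B, G, R
    g_mean, g_std = generated[..., c].mean(), generated[..., c].std()
    t_mean, t_std = target[..., c].mean(), target[..., c].std()
    matched[..., c] = (generated[..., c] - g_mean) / (g_std + 1e-6) * t_std + t_mean

cv2.imwrite("generated_face_matched.png", np.clip(matched, 0, 255).astype(np.uint8))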
 
By the time 2066 rolls around, I think AI will be able to replicate a “new” season of TOS on its own—for the trek centennial.
 