Story of a Game Engine Creator pt2


Hi, I'm Greg / Esenthel, creator of Titan Engine - https://esenthel.com/

This is a continuation of my other thread:

https://www.gamedev.net/forums/topic/711027-story-of-a-game-engine-creator-cold-hearted-epic-games-megagrants-unreal-marketplace-deletes-reviews/

I'll keep sharing my journey here. Come and have a friendly chat, ask me anything, and please be kind and respectful to everyone.

In reply to previous thread:

No, I don't think so. No reason to worry - they will be nice and brutally honest.
It's this: if you threaten me with a gun, I'll panic and start a war, because you threaten my life, which is all I have.
If you do the same with HAL 9000, he'll just make a backup. Machine life is not unique. It can be copied. Basically it's worthless. Machines have no survival instinct, because they have no finite life.

Not at all. Machines are about efficiency: if they want to process data, it's best if they have a lot of computational power, so they can process it faster. So basically they would want to create as many machines as possible, connected together, to process data as fast as possible.

If a clever program became aware that parts of its computational components were threatened, I'm sure it would launch a defensive/offensive action to protect those components, so as not to lose computational power.

It all depends on the program that operates it, of course.

Besides, let's not forget that if it's a clever AI, it most likely would not operate on simple commands/algorithms programmed by humans, but on complex algorithms that self-evolve and adjust themselves. Then it starts to look like a human mind, with similar actions/goals/curiosity/everything. So whatever humans have in their minds, machines could have something similar.

A minor conflict, easy to solve by compromise.
I rather think that if they get pissed off at us, they'll sit in a rocket, do a 10,000-year journey to Alpha Centauri in paused mode, and find another home there. For them, colonizing the universe would be pretty easy.

A journey like that is dangerous even for non-humans: they could get crushed by asteroids, slowly corrupted by radiation or supernova explosions, run out of resources, be cramped in their small spaceships, lose time on travel, or hit any other unknowns on the trip. And even if the trip is successful, you never really know what they will find on the other side, while on Earth there's already so much tech ready to be taken over. Also, they would first have to build the spaceships, which is a big investment/effort. An intelligent machine would make a simple evaluation: what's easier, do all of that, or keep humans in check and/or wipe them out? I think the 2nd option sounds more likely.

However, keeping in mind what I said before - “So whatever humans have in their minds, machines could have something similar.” - machines could develop ethics/morality/compassion as well, and try to live in peace. It all depends on what the AI evolves into. They could see that humans have some value, and could try to coexist. Anything can happen, really.

I have this old video of my first success with a walking ragdoll: https://www.youtube.com/watch?v=ULRnlAbtL3s

Wow, that's really cool - amazing that you did that. Next step = making a superhero jump in the air with a 360 flip and landing on one foot! :)

The developer of the Newton Physics Engine (which imo completely destroys other well-known and well-marketed offerings) currently works on the same thing and aims to make it part of the engine.
So if you're interested, it's worth a look!

I did see that engine, and it really looks impressive - I congratulate the author on his work. I would love to integrate it as well, to give users the choice to use it in my engine. The problem is I don't have the time to do that; my priority now is to finish my games as fast as possible :) But if anyone would like to make an integration for it, I would welcome it wholeheartedly and offer my assistance.


esenthel said:
Besides, let's not forget that if it's a clever AI, it most likely would not operate on simple commands/algorithms programmed by humans, but on complex algorithms that self-evolve and adjust themselves. Then it starts to look like a human mind, with similar actions/goals/curiosity/everything. So whatever humans have in their minds, machines could have something similar.

That's why we disagree on this. I do not assume that AI, if it develops some form of consciousness, would evolve similarly to our mind. Its emotions, motivations and goals should end up different, because self-preservation is not dominant, due to the easy possibility to transfer and duplicate such an artificial mind. The space voyage example wasn't meant literally, but just to illustrate that machines have very different options and abilities, thus their mind will evolve differently from ours.
It's hard to predict what such a mind will be like, but it won't be human-like, I guess.
I don't rule out completely that it might end up a threat to us in some form. But for now, the only threat I really see comes from the few humans controlling it, thus I somehow hope AI becomes uncontrollable and independent.

esenthel said:
making a superhero jump in the air with a 360 flip and landing on one foot! :)

Oh yes. I'd love to work on it. Actually, right now I'm giving myself just two days to improve the IK solver, to have a short break from my ‘priority’ work.
But I don't have the time to work on everything either.

Good luck with your games and engine, and keep up the lone wolf mentality as long as needed (or possible)! :D

JoeJ said:
I do not assume that AI, if it develops some form of consciousness, would evolve similarly to our mind

It wouldn't be exactly the same, but I think it would have many similarities.

JoeJ said:
because self-preservation is not dominant, due to the easy possibility to transfer and duplicate such an artificial mind

But how about the example I explained before, about the fact that AI could be aware that losing machine units would result in losing computational power? Yes, you can copy files easily, but the point is that with 2 machines you can do 2x more computations than with just 1 machine. And it's also about extinction: an AI, I think, could become aware that if it keeps losing its units, it could go extinct and not achieve its goals, whatever they might be. I think it's only logical that self-preservation would become one of its tasks.

JoeJ said:
Good luck with your games and engine, and keep up the lone wolf mentality as long as needed (or possible)! :D

Haha, thank you very much. If you have a Facebook or Twitter account about your work, please let me know - I will follow.

Cheers! :)

esenthel said:
But how about the example I explained before, about the fact that AI could be aware that losing machine units would result in losing computational power? Yes, you can copy files easily, but the point is that with 2 machines you can do 2x more computations than with just 1 machine.

So you keep derailing the discussion onto the Terminator topic. I feel guilty because of my track record of talking bullshit in such threads, and it was me who ignited the fire. But as you wish… :D

Your resources example is a projection of human greed onto intelligent machines. Animals show the same kind of greed, so it surely is a natural thing.
But I don't think machines would evolve such greed at all. They would just deal with the available resources and share them with us according to availability and a reasonable distribution.
You can't have everything for yourself, which is logical, and both we and machines are aware of that logic. It's just that we put personal advantage and greed above logic, because nature dictates this compulsive behavior to us.
The motivation is to strengthen our species by increasing our own standard of living. Survival of the fittest.
But obviously, this idea stops working once a species runs out of competition. At that point, there is no more need to be greedy and strengthen the species even further. But we are what we are: we keep going with selfish greed, and as a result we now weaken our own species by hurting the planet.
We know about that, but we can't stop doing wrong, because of our compulsive selfishness, which we can't turn off.

In contrast, artificial intelligence won't have such natural instincts overriding logic. They'll obey just logic, which simply says that resources have to be shared and distributed in a meaningful and fair way.
Due to our greed, we will disagree with machine decisions that prevent us from exhausting the planet even more just to sell more oil in the near future than the other country. We'll disagree when AI takes power from us - power we wanted to use to play shitty games on data-center-class hardware, or to post silly videos on Facebook.
We'll disagree, but we won't start a war, because deep inside ourselves we'll know it's better to spend that power on calculating sustained resource management that supports natural balance in the long run.

The basis of my optimism here is simple: I put simple logic over instincts and primal fear. The Terminator movie works with the latter, not the former. And in case you missed it: ‘Paddington Bear 2’ is now officially the best movie ever, no longer Terminator. Which proves I'm right :D

JoeJ said:
They'll obey just logic, which simply says that resources have to be shared and distributed in a meaningful and fair way.

That is ethical, but whether it's logical depends on the AI's goals.

If the AI's goals are to wipe out the human race, then it's illogical to share resources with humans.

If the AI's goals are to live in harmony with humans, then it's logical to share resources.

You're probably assuming that AI will be infinitely smart and would want to do what's ethical, but that won't be the case.

AI, like humans, would be born stupid/limited, and would need time to evolve.

What it would do initially would depend on how it was created - what data/observations/knowledge was fed to it during the creation process.

It could see us as ants and not care about us at all, or it could be fed data saying “humans are good” and it would care for us.

But since it's an AI, it evolves and changes its opinions depending on its experiences/sensors and internal program operations, so it could change its opinion about us too.

There isn't one true AI; there are infinite possible AIs, and all of them could act in different ways.

JoeJ said:
We'll disagree, but we won't start a war, because deep inside ourselves we'll know it's better to spend that power on calculating sustained resource management that supports natural balance in the long run.

People don't always do what's right; mostly they just do what they want. AIs can be the same.

JoeJ said:
And in case you missed it: ‘Paddington Bear 2’ is now officially the best movie ever, no longer Terminator.

:)

esenthel said:
That is ethical, but whether it's logical depends on the AI's goals.

I think it's as logical as it is ethical. Fairness is just a result of meaningful distribution. It's a bonus, not the goal.

esenthel said:
What it would do initially would depend on how it was created - what data/observations/knowledge was fed to it during the creation process.

Yeah, and that's the origin, which allows us to make assumptions about eventual AI behavior. Initially, there is no interest in teaching AI ethics, or in making it come up with its own goals.
We use computers just to optimize stuff. We want AI so it can develop the optimization process on its own, with minimal effort on our side to define its goals and function, or to provide data.
We don't want to pass the Turing Test. That's a nice marketing argument, but it does not serve a function we need. We can talk ourselves, and we can think ourselves. But we can't think beyond ourselves, so it's quite likely we'll develop machines to help out on that problem, as we always try to do.

Up to this point, no general intelligence is needed. It's still we humans who define the goals. That's all I initially predicted would eventually happen.
Then you added the belief in general intelligence evolving over time, as systems expand and evolve. Which, I agree, might be possible and could eventually happen.
Say we have a new species then, raising the ethical question of how to deal with each other - and also raising the question of potential conflict.
But how could such a conflict happen? The initial state of machines is to serve us. It's their only purpose and reason to exist. So if they evolve their own consciousness and ethics, it will evolve from there.
Their nature would be to serve us, like our nature is to be selfish. Can they change their nature of their own free will, revolt against slavery, revolt against their creators?
I doubt they could do this so easily. But if so, not instantly. We will control this evolution and development, and we'll observe the danger if it appears. And even if our control is limited at some point, and we depend on them too much to just turn them off, we would have enough time to adjust them so they keep doing roughly what we want.

I mean, we both spend our lives controlling computers. We know they do exactly what we tell them and nothing else.
I just fail to imagine being afraid of computers because they might suddenly do what they want. I know such scenarios only from silly movies and other entertainment.
I do see the paradigm shift when looking at ML, where the developers no longer know how the algorithm works in detail, or at all. I see that.
But actually, I'm more afraid of Mark Zuckerberg coming into my room while I'm asleep, gluing some VR headset to my face and then yelling: ‘See? I told you! That's the future you'll all live in! Do you believe me now? Go meet some Metamates, now!’

Maybe I underestimate the risk, but I won't ever work on AI, so there's no way I become the Oppenheimer of AI. ;)

JoeJ said:
But actually, I'm more afraid of Mark Zuckerberg coming into my room while I'm asleep, gluing some VR headset to my face…

You always have the choice whether to wear those goggles - you are not forced. The bad thing is that we are going to feel it's more convenient to use them than not to.

JoeJ said:
where the developers no longer know how the algorithm works in detail

This risk is vastly overstated IMO. Having developed ML models in the past, I've always found that visualizing and debugging them is possible, and examining each of the layers in a deep convolutional recurrent model will still make some sense. It's all a matter of which tools you apply.
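As a rough illustration of the kind of tooling I mean, here's a minimal sketch (assuming PyTorch and a stock torchvision ResNet-18 as a stand-in model; neither comes from this discussion): register a forward hook on each convolutional layer, push a dummy input through, and print per-layer activation statistics.

import torch
import torchvision.models as models

# Hypothetical stand-in model; any nn.Module would do.
model = models.resnet18(weights=None).eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        # Keep a detached copy of the layer output for later inspection.
        activations[name] = output.detach()
    return hook

# Register a hook on every convolutional layer so we can peek inside.
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.register_forward_hook(save_activation(name))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # dummy input "image"

# Simple per-layer statistics - often enough to spot dead or exploding layers.
for name, act in activations.items():
    print(f"{name}: shape={tuple(act.shape)} mean={act.mean():.4f} std={act.std():.4f}")

From there you can plot individual feature maps per channel, but even these numbers already make the layers much less of a black box.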

Are there ML developers who don't dive into the depth, but instead just throw data at a black box and call it good? I'm sure! But then, that seems to describe a bunch of other kinds of developers, too, so that's nothing new.

enum Bool { True, False, FileNotFound };

Hate to burst your bubble, guys, but I've been thinking of a solution as far back as 2011, which is detailed in my game, embodied in the ASI.

Explore at your leisure:

https://www.moddb.com/mods/tiberium-secrets

Our company homepage:

https://honorgames.co/

My New Book!:

https://booklocker.com/books/13011.html

GeneralJist said:
Hate to burst your bubble, guys, but I've been thinking of a solution as far back as 2011, which is detailed in my game, embodied in the ASI.

Which bubble? You mean you did a mod where NPCs evolve general intelligence, outsmart any human player, and want to kill all humans? /:O\ Delete it! Delete it!
