Page 3 of 4
Re: TEOTWAWKI
Posted: Thu Feb 23, 2012 4:29 pm
by Red Devil
speaking of environment, what's yours like over there lately, MTS?
Re: TEOTWAWKI
Posted: Thu Feb 23, 2012 11:30 pm
by MrTwosheds
That's a fairly wide-ranging question.
In general, biodiversity is in decline; we are probably still losing about 100 species per year. Water shortages are the norm. Obvious pollution is declining along with our industrial capabilities, and infrastructure is decaying due to very long-term under-investment. Marine pollution is constantly increasing and the marine environment is still being damaged by irresponsible exploitation, so fish stocks are also in decline. New average temperature records are broken every year.
Bankers are doing very, very well indeed, despite losing a lot of other people's money. "Legalised" corruption is as endemic as it has always been. Politicians are still suffering from a chronic honesty deficiency. Farmers still regularly go bankrupt, as if that weren't a perfectly clear indication that there is something insanely wrong with our economy. All of our nation's ills are the fault of the poor, immigrants and young people, obviously... SNAFU is the accepted term, I believe.
Re: TEOTWAWKI
Posted: Fri Feb 24, 2012 12:39 am
by Red Devil
so, other than that?
Re: TEOTWAWKI
Posted: Fri Feb 24, 2012 6:17 am
by Zax
Every thread has to become political, yikes. Not surprising.
Re: TEOTWAWKI
Posted: Fri Feb 24, 2012 7:23 am
by Zero Angel
Red Devil wrote:so, other than that?
Pretty good, I would imagine.
Re: TEOTWAWKI
Posted: Fri Feb 24, 2012 6:41 pm
by Psychedelic Rhino
MrTwosheds wrote:Currently I think our AI development is in its equivalent of our Devonian period, got some bacteria, plants and some promising-looking molluscs... Not one of them capable of holding a pen, let alone attempting the works of Shakespeare.
I too believe we are at the beginning regarding AI, and true nano and genetic manipulation. I feel we are more like where we were with networking and computing in the early 1970s.
The big difference now is that we have support technologies that will greatly accelerate the progress. We are also moving away from the old mindset of what AI should be. The 'Turing Test' and its anthropomorphic fallacies are losing ground to more insightful ideas of what intelligence is, or needs to be.
Another aspect that will accelerate AI is the huge advantage it has over chemical-based intelligence systems such as ourselves. Our brains operate at chemical-reaction speeds, on the order of a few dozen meters per second, whereas AI will most likely operate at or near light speed... if not millions of times faster than us, certainly hundreds of thousands. Add virtually unlimited storage, the wild card of esoteric quantum computing, and the ability to consciously self-program, and it will likely snap away from our comprehension.
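The speed gap described above can be roughed out with back-of-the-envelope numbers. The figures below are illustrative assumptions (fast myelinated axons, signal propagation in copper/fibre, typical neuron firing rates and processor clocks), not measurements:

```python
# Back-of-the-envelope comparison of biological vs. electronic signalling.
# All figures are rough, illustrative assumptions.

neural_signal_speed = 100.0      # m/s: fast myelinated axons (~1-120 m/s)
electronic_signal_speed = 2.0e8  # m/s: roughly 2/3 of light speed in a wire

ratio = electronic_signal_speed / neural_signal_speed
print(f"Signal-propagation ratio: ~{ratio:,.0f}x")  # ~2,000,000x

# Switching rates tell a similar story:
neuron_firing_rate = 200.0  # Hz: near the upper range for a neuron
processor_clock = 3.0e9     # Hz: a ~3 GHz processor clock

print(f"Switching-rate ratio: ~{processor_clock / neuron_firing_rate:,.0f}x")
# ~15,000,000x
```

On these assumed numbers, the raw hardware gap is in the millions, consistent with the "hundreds of thousands, if not millions" range above; how much of that translates into effective intelligence is of course the speculative part.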
I think it will be analogous to balancing a flat playing card on its edge while pushing it gently forward. We are now nudging gently, wanting and believing we can keep the card on its edge... the balance being a highly beneficial realm of AI for our use, solving our age-old problems of the environment, physics, medicine, politics, philosophy. However, that edge is extremely precarious. Once it falls forward, out of balance, it establishes itself in weeks or days, maybe hours, exploding to a subjective IQ equivalence in the millions. It foresees all our lumbering 'moves' at re-balancing within a few seconds. In fact, possibly only a micro subset of its capability is all that is needed to foresee and counter all our ideas of recovery and control. Like a chess game where it has run a billion scenarios in a few seconds.
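The chess figure above is not far-fetched even for today's brute-force engines. A rough sketch, using assumed round numbers (a branching factor of ~35 legal moves per position and a search speed of ~200 million positions per second, both illustrative, not benchmarks):

```python
# Rough game-tree arithmetic behind "a billion scenarios in a few seconds".
# Both figures below are illustrative assumptions, not engine benchmarks.

branching_factor = 35           # typical legal moves per chess position
nodes_per_second = 200_000_000  # plausible brute-force search speed
budget = 1_000_000_000          # one billion positions

# How deep does a billion-node search reach with no pruning at all?
nodes, depth = 1, 0
while nodes * branching_factor <= budget:
    nodes *= branching_factor
    depth += 1

print(f"~1e9 nodes covers a full-width search about {depth} plies deep")
# -> about 5 plies (35^5 = 52,521,875; 35^6 already exceeds a billion)
print(f"...taking roughly {budget / nodes_per_second:.0f} seconds")
# -> roughly 5 seconds
```

With pruning, real engines search far deeper for the same budget, which is the point: exhaustive lookahead is cheap for silicon in exactly the places where humans must rely on intuition.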
That's exactly why it's been called the Singularity: once the tipping point is reached, no human can predict what the scenario will be.

Re: TEOTWAWKI
Posted: Fri Feb 24, 2012 10:49 pm
by MrTwosheds
Zax wrote:Every thread has to become political, yikes. Not surprising.
Some of us do not consider "the Environment" to be political; ultimately it is not a matter of choice, it is our life-support system.
Those who do not wish to preserve it, for whatever reason, should be given the opportunity to try to survive without its gifts.
Psychedelic Rhino wrote:That's exactly why it's been called the Singularity, once the tipping point is achieved, no human can predict what the scenario will be.
Well, generally we can, actually. This planet is already populated with thousands of intelligent species; as humans we choose not to recognise this. Unless the AI concerned is very firmly cast and controlled as a "human-type intelligence", it will invariably, like nearly all the others, perceive us as hostile, scary monsters and behave in the manner most likely to ensure its own survival.
Re: TEOTWAWKI
Posted: Sat Feb 25, 2012 1:08 am
by Psychedelic Rhino
I truly cannot predict what a super intelligence might do. As I mentioned earlier, Greg Egan specializes in this in several of his novels, as does Vernor Vinge.
Vinge is much more of a proponent of a benevolent super-AI. As I've said before, those who discuss the idea... individuals, groups or committees... have average IQs of maybe around 140, if they're lucky. Speculating about an AI that has a comparative IQ in the hundreds of thousands AND can implement its discoveries and ideas almost as quickly, is simply too far beyond the pale to consider with confidence.
What I find amusing is the pop media's fascination with alien visitation and UFOs: basically discussing advanced alien intelligence, when super-AI is almost certain to be with us within a few decades. Yes, of course, it can be argued that man has a hold on sentience, but we are learning every day that there are lower mammals that can learn much of what we thought was 'ours'.
Re: TEOTWAWKI
Posted: Sat Feb 25, 2012 1:37 am
by MrTwosheds
What I am trying to point out is that the type of intelligence used/evolved by a being is very much dependent on its physical form. Just forget about a super-intelligent sentient computer; it would be absolutely nothing like a human or any other animal. It would be a brain in a box and have nothing in common with us at all. Something more like Star Trek's Data would be required to even begin to develop a human-like intelligence, but even then, endowing it with super intelligence would also make it extremely different.
It's difficult to imagine what a human raised by beetles would be like, but this is the situation such an AI would find itself in.
It would probably not be happy to dedicate itself to improving the lot of beetle kind.
In order to end up with something sane and well disposed towards us, you would probably need to reproduce as much human experience as you can, which would probably mean it not being super-intelligent or fast from the moment you turned it on. It would need a childhood in order to learn all sorts of stuff we take for granted; indeed it would need this far more than a human does, since much of what makes us what we are is coded in via our genetics.
As far as I can see, the sensible way for humanity to advance is by enhancing ourselves, not making monsters in the hope that they like us. We don't even like ourselves very much...
Re: TEOTWAWKI
Posted: Sat Feb 25, 2012 1:56 am
by Axeminister
We have 6 billion human neighbors on this planet. Why spend a lot of money, time, thought and effort to build someone else to talk to, when we have starving people and other problems those resources could help with? Especially don't build someone that might learn faster, surpass us intellectually and ultimately shed us from their life like so much baggage. If we wanted to play God, we could create something specifically built to thrive in a nearby planet's atmosphere/terrain, drop it off there and just watch it grow, but not here on Earth where it could cause us harm. We could watch it and maybe learn new adaptive processes. But don't let another being that we create walk among us; it would simply be too dangerous. We would have to cast it out of society at some point; it would be inevitable. I believe there is a story in a good book about this sort of thing for a reason.
Re: TEOTWAWKI
Posted: Sat Feb 25, 2012 2:43 am
by MrTwosheds
Well, I'm not saying we shouldn't try, but we do need to be very careful about what we do and recognise that there is a lot more to intelligence than just having a big brain. We also need to recognise that a human-type intelligence is just one form out of many possibilities, and that it is not just the obvious result of high intelligence.
Anyone who has ever had much interaction with elephants should have experienced the feeling of doubt as to their own superiority. Elephants are much smarter than humans, but it is expressed in a way that is only useful to an elephant; had they evolved hands, we would still be hiding in the trees...
Re: TEOTWAWKI
Posted: Sat Feb 25, 2012 2:52 am
by Psychedelic Rhino
Axeminister wrote:Especially don't build someone that might learn faster, surpass us intellectually and ultimately shed us from their life like so much baggage.

Re: TEOTWAWKI
Posted: Sun Feb 26, 2012 2:55 am
by Red Devil
as much as some of you guys might like to focus on AI, the more realistic dangers facing us are:
- the ever-increasing human population coupled with
- the current global economic meltdown coupled with
- the ever-decreasing fauna and flora population coupled with
- the ever-growing radical Islamic and socialist population and governments coupled with
- the spread of nuclear arms coupled with
- the degeneration of morality coupled with
- the ever-decreasing finite material and energy resources coupled with
- the infinite capacity for people to do the stupidest thing at the worst possible moment
Any one of those eventually leads to lots and lots of war, famine, pestilence, and disease. Have them all happening at the same time - as they are now - and we're on a one-way ride to pain.
Compared with that scenario, some advances in computer AI seem insignificant, and more of a way to just ignore the real issues of the times we live in.
Re: TEOTWAWKI
Posted: Sun Feb 26, 2012 4:07 am
by Psychedelic Rhino
Red Devil wrote:as much as some of you guys might like to focus on AI, the more realistic dangers facing us . . .
Compared with that scenario, some advances in computer AI seems insignificant and more of a way to just ignore the real issues of the times we live in.
The novel aspect of the Singularity, versus the problems you list, is that the majority of mankind, while perhaps not realizing it directly, wants AI, is indirectly pursuing AI, and is eager for it to arrive as soon as possible. The problems you list are considered by the majority of mankind to be a bane on society.