Everything posted by St. Goliath

  1. I don't know what you call fuzzy logic, but when I think of the control systems I was forced to design, it boils down to simple interpolation. Regarding neural networks, a neuron basically does weighted addition, where the weighting factors can be adjusted using some form of feedback (a small sketch of that below). Both things are computable, i.e. if you know the current state, you can determine pretty well what will happen next; you can simulate it with a Turing machine and there we have our algorithms, our arithmetic operations, branches and loops. =P (unless you prefer functional programming languages) I mentioned that above, which makes the whole discussion even more interesting. If you look at the whole thing as a huge network of neurons and look at how you actually perceive the world around you, it raises the question of what consciousness and existence actually mean. Something we won't solve here. Yes, people attribute human characteristics to lots of things, especially if they don't know how they work (that's why I used the term "magic"), but if you know that you are dealing with a machine that implements "consciousness" as a series of calculations, written down by somebody, will you say it has feelings or call it conscious? I think that even if people attribute human characteristics to robots, that thought will come to their minds when dealing with human-like machines (-> uncanny valley), amplified of course by chauvinism and xenophobia.
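
A minimal sketch of that weighted-addition-plus-feedback idea: a single artificial neuron trained with the classic perceptron rule. The learning rate, epoch count and the AND task are illustrative choices, not taken from any particular source:

```c
#include <stdio.h>

/* A single neuron: weighted sum of inputs, then a hard threshold. */
static int neuron(const double w[3], const double x[3]) {
    double sum = w[0] * x[0] + w[1] * x[1] + w[2] * x[2];
    return sum > 0.0 ? 1 : 0;
}

int main(void) {
    /* Learn logical AND; x[0] is a constant bias input. */
    double w[3] = { 0.0, 0.0, 0.0 };
    const double inputs[4][3] = {
        { 1, 0, 0 }, { 1, 0, 1 }, { 1, 1, 0 }, { 1, 1, 1 }
    };
    const int targets[4] = { 0, 0, 0, 1 };
    const double rate = 0.1;   /* feedback strength for weight adjustment */

    for (int epoch = 0; epoch < 100; ++epoch) {
        for (int i = 0; i < 4; ++i) {
            int err = targets[i] - neuron(w, inputs[i]);
            for (int j = 0; j < 3; ++j)   /* adjust weights by feedback */
                w[j] += rate * err * inputs[i][j];
        }
    }
    for (int i = 0; i < 4; ++i)
        printf("%g AND %g -> %d\n", inputs[i][1], inputs[i][2],
               neuron(w, inputs[i]));
    return 0;
}
```

It is exactly the kind of deterministic loop-and-arithmetic computation described above: given the current weights, the next state is fully determined.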
  2. My post above (that apparently no one read) basically boils down to the point that machines like the computer you're using right now are not magic. It's circuitry and properly executed algorithms ("mathematical cooking recipes"). Loads of them. If such robots are built, they won't be magic either. They will be designed and made in the exact same way, and people will refuse to call them conscious or say that they have feelings or emotions, constantly thinking that they missed the goal of designing such a robot, because it's just algorithms.
  3. uu7jA8EHi_0 Just came across this awesome experiment from the MIT Game Lab while browsing Boing Boing. It's an experimental game to illustrate relativity by lowering the speed of light to human walking speed. Basically one has to collect objects in the game, and the more objects you collect, the slower the speed of light gets, making it increasingly difficult. (It gets quite psychedelic; a worked number below.) I thought you might be interested and posted this here (in the correct forum, I hope^^). I personally like the idea of toying with such concepts and getting them into gameplay mechanics, and would like to hear (read) some opinions on that. The video above contains some explanations and some gameplay footage. You can download the whole thing from the project homepage here.
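
For scale: the time-dilation factor is \( \gamma = 1/\sqrt{1 - v^2/c^2} \). With c lowered to, say, 1.4 m/s and a player walking at 1.0 m/s (illustrative numbers, not taken from the game):

\[ \gamma = \frac{1}{\sqrt{1 - (1.0/1.4)^2}} \approx 1.43 \]

i.e. already a 43% time-dilation effect from ordinary walking.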
  4. A major problem here is how one defines "consciousness". It overlaps with the main issue of the field of "artificial intelligence". The term AI is mainly associated with unsolved problems in computer science. As soon as a problem is solved and there is an algorithm, people refuse to call it intelligent and it is attributed to another field. (E.g. things like compiler optimisation used to be AI problems; pathfinding in computer games is technically not an AI problem either.) So, on the one hand, if you were an engineer and designed such a system, you would know how it works down to the bottom of it. You would know that there is an array of sensors that delivers data, which is fed through algorithms (perhaps embedded in an FPGA, or loads of arithmetic, loops, branches, etc. in software) that then assign the term "pain" to it if it exceeds a certain threshold (sketched below). It's a sequence of calculations. Would you call that intelligent? An emotion? Consciousness? On the other hand, if you analyze the human brain, it basically boils down to the same thing, yet you experience what you are experiencing right now. You start asking yourself how that is even possible, or how other living entities perceive the world around them, or even those machines? Only to realize that you can't and never will know, because "you" yourself are locked into that system of neurons in your head.
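
To make the "sequence of calculations" point concrete, a deliberately crude sketch; the sensor values, the threshold and all names are made up for illustration:

```c
#include <stdio.h>

#define NUM_SENSORS 4
#define PAIN_THRESHOLD 0.8   /* made-up cutoff */

/* Reduce an array of pressure-sensor readings to a single label. */
static const char *classify(const double readings[NUM_SENSORS]) {
    double max = readings[0];
    for (int i = 1; i < NUM_SENSORS; ++i)
        if (readings[i] > max)
            max = readings[i];
    return max > PAIN_THRESHOLD ? "pain" : "ok";
}

int main(void) {
    double gentle[NUM_SENSORS] = { 0.1, 0.2, 0.1, 0.3 };
    double crush[NUM_SENSORS]  = { 0.2, 0.95, 0.4, 0.1 };
    printf("gentle touch: %s\n", classify(gentle));  /* ok   */
    printf("crushed hand: %s\n", classify(crush));   /* pain */
    return 0;
}
```

Nothing but comparisons and a branch; the question above is whether stacking enough of these ever adds up to an experience.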
  5. Surviving probation with the company building still intact.
  6. Producing hydrogen is currently not as energy intensive, because industrial hydrogen is produced from oil as well. Basically it works by ripping the hydrogen off and releasing the remaining carbon as CO2, so as a fuel for cars it would be just the same as burning oil, only less efficient. On the other hand, getting hydrogen from water through electrolysis is also very energy intensive (see the numbers below). Yes, the atmosphere is pretty thin and air pressure is lower than on Earth, but it should be breathable(?). AFAIK scientists don't yet agree on why Mars has such a thin atmosphere. A common theory I know is that it was basically blown away by the solar wind, as Mars has no magnetic field to shield it off, like Earth has. Other theories include, of course, asteroid impacts, et cetera. But even if you had a breathable atmosphere, you would still have temperatures like at the South Pole. It might be an intermediate solution for a smoother transition to newer technology, but certainly nothing long-term. In the long run, we are probably better off using the power from fusion more efficiently for transportation, instead of using the most energy-demanding way possible (excluding short-distance teleportation). Also, if it was a closed cycle, it would keep the global CO2 level constant instead of reducing it, which would be better, as the status quo already includes a lot of damage to the environment through raised temperatures. But then again, we would have to be careful not to get all the CO2 sucked out of the air through plants, causing a global cooling effect^^.
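
For the electrolysis route, the thermodynamic floor is fixed no matter how good the electrolyser gets; per mole of hydrogen you must put in at least the enthalpy it releases when burned again:

\[ 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}, \qquad \Delta H^\circ \approx +286\ \mathrm{kJ\ per\ mol\ H_2} \]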
  7. Finally tidying up my room. Got some workspace now; can't breathe covered in dust all over.
  8. Interesting that somebody actually produced a device that rips the carbon off carbon dioxide and the hydrogen off water and pieces together longer chains (although I doubt that this has never been done in a lab before; the underlying reaction is sketched below). The idea, however, is not new and has been dismissed various times as very energy intensive. You have to put way more energy into the production of a liter of petrol than you can ever gain from burning it again. It would probably be easier to produce petrol out of plastics, but that stuff floats around in the Pacific, not in the air. It wouldn't really create a petrol cycle, because we would have to put loads of electrical energy into it to keep it going. In the long run, it's probably a better idea to rethink transportation in general and use those power plants we would build for petrol-out-of-thin-air factories to power homes. The whole thing reminds me of a conversation I had with my dad about two months ago, about terraforming Mars. It was something like "...the atmosphere on Mars is 95% carbon dioxide... I guess if they ever build a large-scale device for turning carbon dioxide into carbon and oxygen, they would probably come up with the idea of transporting the carbon back to Earth to fuel power plants here..."
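
For reference, the basic chemistry is indeed old; the Sabatier reaction (known for over a century) already builds a simple hydrocarbon from CO2, and the hydrogen input is where the electrical energy goes:

\[ \mathrm{CO_2} + 4\,\mathrm{H_2} \rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O} \]

Longer chains, i.e. petrol-like fuels, take further synthesis steps on top, each with its own losses.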
  9. Well, at least the police will know who to question first. (Just out of curiosity, why is it "VERY possible" that one of them disappears?) Aside from that, I can't really help though, I'm a programmer, not an artist. (Sorry)
  10. DC56
  11. A battery-powered, portable Tesla coil. Well, nothing new, but I've never seen one that size before. I would not exactly call it a gun though, and you won't be able to actually cause any damage with it. Because of the high frequency and the small current, the worst you can get from a battery-powered Tesla coil is a minor skin burn at the "entry" point. His inventions are already omnipresent in our everyday lives, be it the AC power grid, radio, X-rays, radar or other things. See here for more.
  12. Adding more cores doesn't magically double performance. What a processor does is fetch an instruction and then do something (e.g. load a data word from RAM into a register, or add two registers together). Multiple cores mean that a program can get the CPU to fetch two (or more) different instructions and execute them in parallel (i.e. at the same time), so a program can boost performance by processing data in parallel if and only if the algorithm in question can process data elements independently of each other. There are problems that can't be parallelized (there's a saying some professors use at that point, something like "Bearing a child will always take 9 months, no matter how many women are involved"; Amdahl's law, below, makes this precise).

The particular friend works at a company that develops embedded systems. When comparing options for CPUs, he decided (for the fun of it) to compare a single i7 core (fetching and executing instructions one by one, only one ALU, et cetera) with a Pentium 4 (single core, one ALU, et cetera) at roughly the same clock speed. But of course, the i7 can be faster in a real-world desktop application, because the OS can run several applications independently on different cores with real parallelism, and it will probably need less energy, as the OS can power down unneeded cores when only one application runs that doesn't utilize the other cores. However, there is room for improvement, as the instruction decoding eats up more clock cycles, making the individual core slower as extensions are added.

Yes, but the game consoles and Apple computers are a little different from the PC market; they are closed platforms, there is only one vendor for each device that has complete control, and the devices are nowhere near compatible with each other. There are, however, a lot of PC hardware vendors that develop a single, open platform, and software will run across various hardware combinations. If somebody comes up with a new device, it will fade out quickly, as desktop computer owners will stick with their PCs, since their beloved Windows, as well as the arsenal of proprietary applications they use, won't run on a different platform. On the hardware side, those things I said above; on the software side, it can be way cleaner when we have an adequate, planned core instruction set instead of all those hacky extensions. Of course, yet another architecture won't do any good by itself; what would is a transition to it.
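
The "nine months" saying is essentially Amdahl's law: if a fraction p of a program's work can be parallelized, the speedup on n cores is bounded by

\[ S(n) = \frac{1}{(1-p) + p/n}, \qquad \lim_{n \to \infty} S(n) = \frac{1}{1-p} \]

so with, say, p = 0.5, even infinitely many cores give at most a 2x speedup.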
  13. If you depend on software from back then, it is good. The problem, however, is that modern x86 CPUs are bloated with hacky extensions. There are, for example, three different syscall instructions, and you have to fiddle around with some status bits to find out which one to use (see the sketch below); there are about 10 SIMD extensions(!). Some instructions need more and more clock cycles to execute because the decoding gets way more complicated, which is compensated by internal Harvard-style caching and branch prediction to get actual speed gains; it only acts like a Von Neumann machine for the sake of backwards compatibility. A friend of mine had a case at his company where a Pentium 4 outperformed an i7 (about the same clock speed, only one core used on the i7). IMO the backwards compatibility is getting (or is already) in the way of progress. IMO we should design a whole new CPU architecture from scratch, standardized and governed by a consortium of vendors (like OpenGL), to avoid the extension chaos and bloat. But there are two things that keep a new architecture from taking off. For one, the Windows dominance and proprietary software in general are in the way. To get MS and application developers to port to a new architecture, you need to convince them with a huge number of users. It can't be ported by others, because it's proprietary. For the same reason you won't get hardware drivers/hardware support. Both turn possible users away from that platform. Intel already tried that themselves (remember Itanium?). And there was already something like that consortium thing (remember PowerPC?). The second reason: if you don't have a lot of users, nobody will invest money in your architecture ($2M minimum to get ASIC production started) and you can't match the development and manufacturing of Intel, and thus won't even be near their performance. Yes, because PhysX is not run on the ATI card but uses a CPU fallback instead, like I said above. See the PhysX FAQ.
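
To see what "fiddling around with status bits" looks like in practice, a sketch using GCC's <cpuid.h> to probe a few of the SIMD generations (GCC or Clang on an x86 machine assumed):

```c
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang only, x86 only */

int main(void) {
    unsigned eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID level 1 not supported\n");
        return 1;
    }
    /* Each SIMD generation gets its own feature bit to test... */
    printf("SSE2:   %s\n", (edx & bit_SSE2)   ? "yes" : "no");
    printf("SSE3:   %s\n", (ecx & bit_SSE3)   ? "yes" : "no");
    printf("SSSE3:  %s\n", (ecx & bit_SSSE3)  ? "yes" : "no");
    printf("SSE4.1: %s\n", (ecx & bit_SSE4_1) ? "yes" : "no");
    printf("SSE4.2: %s\n", (ecx & bit_SSE4_2) ? "yes" : "no");
    printf("AVX:    %s\n", (ecx & bit_AVX)    ? "yes" : "no");
    return 0;
}
```

And that only covers a fraction of the extension list; every new generation adds more bits to check.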
  14. To brighten up a few things: When IBM decided to build their own desktop computer, the "IBM PC", they decided to use the Intel 8088 processor (basically a cheaper version of the Intel 8086) and later switched to the Intel 80286 (an extended version of the 8086). Also, in 1980 they decided to sign up with Microsoft to develop an operating system, "MS-DOS", which they rebranded "PC-DOS". For various reasons, the IBM PC was a success, despite the crappy architecture and OS in comparison to other machines at that time. One reason may be the good reputation of IBM; however, IBM encouraged other companies to sell licensed clones of the IBM PC, and other companies, like AMD, also started to manufacture their own 80x86 processor clones with their own extensions, which were then copied by others and so on (e.g. AMD made the original 64-bit extension to the x86 architecture). What you are now most likely sitting in front of (except if you are using a "smart" phone or other embedded mobile device) is a heavily modified/evolved version of the IBM PC with an x86-based processor, a horrible mess of a patched-up architecture with hundreds of extensions, backwards compatible to 1979. The dominance of the Wintel platform (Windows + Intel) on the desktop and the Windows monopoly have secured that it remains that way and the x86 processor gets patched and extended all over. My personal hope is that, with the shift towards mobile platforms with newer architectures and the upcoming WinDOS 8 disaster, we might someday get rid of that stone-age abomination they call the "x86 architecture".

There's a reason the original Ageia PhysX PPUs flopped: people don't buy extra hardware that no software utilizes, and software developers don't develop software that utilizes hardware nobody has. Nvidia graphics cards don't have a PPU on them. Nvidia acquired Ageia, but only uses the PhysX API (Application Programming Interface) and implements it with GPGPU (i.e. a program running on the GPU), so it could theoretically run on GPUs from other vendors (e.g. AMD) as well, but Nvidia ties PhysX to their platform by making it refuse to work on other GPUs (and it is probably optimized for their GPUs only as well), so developers using PhysX would either lock their software to Nvidia GPUs (or good ol' way slower CPU physics on others), or have to take the extra effort of using two physics engines with an abstraction layer in between. But there are other physics engines that use GPGPU, like Bullet, which is even free (as in free speech, not free beer; i.e. no restrictions, aka "Open Source"), so why use PhysX?
  15. Obviously the G-Man works for the railway organisation. Why else would he wear that ugly blue suit and stick around inside trains all the time? Also, at the beginning of HL2, the train stops after he disappears towards the front of the train. Obviously he was operating the train.
  16. First day of CS at the University of Innsbruck. Spent several hours in a small lecture hall crowded with theoretical physics, chemistry and maths students without having a seizure (aka a good start). Quote of the day from a woman sitting behind me: "When studying maths you have to calculate lots of stuff and such...."
  17. Yeah, in realistic scenarios, you can't really shoot anything under water (harpoon maybe?). In deeper water (e.g. the sea) you also can't really see that far, which would make exploring under water quite a pygalgia and the scenery quite boring. What adds to that is that movement under water without special equipment is rather slow. I too think that underwater areas do not really fit into FPS games. Personally, I found the underwater sections in HL1 quite annoying, especially since you run out of oxygen all the time and had to return to the surface, which makes exploring tedious. On the other hand, I really liked the underwater sections in the Indiana Jones (TM, $$) games I used to play on my brother's PS1 and PS2 some years ago. Underwater sections are IMO a great concept for puzzle games.
  18. Banned for banning random people for no reason whatsoever
  19. Indeed, I'm searching for an easy way to programmatically detect those rooms (to avoid doing it manually). AFAIK, the map file formats store a visibility tree that gives information on which BSP leaves can be seen from another BSP leaf. Because those big empty rooms do not connect to anything, they should not be visible from anywhere. Culling them might be a little more difficult though, as the big rooms could be split up themselves into nodes that "see" each other, and circular visibilities will probably exist in other areas of the map too; but in contrast to other areas, the big rooms will generate single isolated circles, not pointing to other circles, I guess. How about rendering one map (e.g. all even-numbered maps) into the stencil buffer and then, using the stencil test, culling away overlapping fragments when rendering the other map (e.g. all odd-numbered maps)? A sketch below. Which OpenGL version are you using in your renderer?
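
A sketch of that stencil approach in plain OpenGL; draw_map() stands in for your renderer's actual map-drawing routine, and the context is assumed to have been created with a stencil buffer:

```c
#include <GL/gl.h>

/* Placeholder for the renderer's actual map drawing routine. */
void draw_map(int map_index);

/* Draw map A, marking every covered pixel in the stencil buffer,
 * then draw map B only where the stencil is still untouched. */
void draw_non_overlapping(int map_a, int map_b) {
    glEnable(GL_STENCIL_TEST);
    glClear(GL_STENCIL_BUFFER_BIT);

    /* Pass 1: always pass; write 1 wherever map A produces fragments. */
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    draw_map(map_a);

    /* Pass 2: only pass where the stencil is still 0, i.e. cull
     * fragments of map B that would overlap map A. */
    glStencilFunc(GL_EQUAL, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    draw_map(map_b);

    glDisable(GL_STENCIL_TEST);
}
```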
  20. Moving to Poland was (AFAIK) an April Fools' joke; moving to LA was to work directly at the Machinima HQ, later moving back, as working at Machinima caused loads of work hours and plunging productivity at the same time (as far as I can recall).
  21. I just zapped by some documentary on TV (BR-alpha). Apparently there were Buddhist monks somewhere in Tibet who actually did carve artificial caves way up into a cliff and built monasteries there some time ago, but most of them got destroyed during the Cultural Revolution. I guess that solves some parts of the question about cliff-dwelling societies.
  22. I saw some gmod addon do something like that a few years ago. Of course, the name is horribly hard to guess. It is, of course, way simpler than what L4D2 does: the addon just scales the limbs down, so you can't see them anymore, and adds a blood-spray particle effect. That, plus spawning gibs accordingly, should do it. Seeing as it requires modification of the code and BM:S has AFAIK no scripting system like gmod, it is pretty much impossible to implement by yourself (unless you can fluently read and write x86 machine code, or get the devs to give you the source). According to the Valve presentation Rendering Wounds in Left 4 Dead 2, the more sophisticated approach used in L4D2 clips the model geometry with an ellipsoid in the vertex shader (sketched below), does fine-grained clipping by alpha-testing against a pre-baked texture in the fragment/pixel shader, and then renders internal body parts into the wounds. And again, this needs access to the source. Sorry that I can't give you any hopes on this end, but at least you now know two possible ways of implementing it.
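
For the curious, the heart of that ellipsoid clipping step is just a point-in-ellipsoid test, roughly like this in C (struct and names invented for illustration; the real thing runs per vertex in a shader, as the Valve slides describe):

```c
#include <stdbool.h>

/* Axis-aligned wound ellipsoid in model space (illustrative struct). */
typedef struct {
    float cx, cy, cz;  /* center */
    float rx, ry, rz;  /* radii  */
} wound_ellipsoid;

/* A vertex gets clipped away (leaving the wound hole) if it lies
 * inside the ellipsoid: (x/rx)^2 + (y/ry)^2 + (z/rz)^2 < 1. */
bool inside_wound(const wound_ellipsoid *w, float x, float y, float z) {
    float dx = (x - w->cx) / w->rx;
    float dy = (y - w->cy) / w->ry;
    float dz = (z - w->cz) / w->rz;
    return dx * dx + dy * dy + dz * dz < 1.0f;
}
```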
  23. For technical reasons, I'd prefer Nvidia cards. Their OpenGL implementation is always on the cutting edge (newest version & interesting extensions), way less buggy and extremely error-tolerant, whereas AMD/ATI sticks closely to the standard and the official ATI drivers tend to have loads of bugs (I don't know about Direct3D differences, as I use Windows about once per month), and, in my experience with GPGPU applications, Nvidia cards tend to have higher data throughput. For political reasons, however, I'd prefer AMD/ATI cards. They release the specs of their cards, which makes it possible to develop free drivers (free as in freedom), whereas Nvidia releases binary-only drivers and keeps the hardware interface a big secret, or in Linus' words, they are the single worst company in that regard. As a developer, I am currently on the lookout for an AMD/ATI card to replace my GTX 285, because of their strictly standard-conforming (if sometimes buggy) OpenGL implementation. In my experience, if something runs on an AMD card as expected, it is practically guaranteed to also run flawlessly on other cards. (Also, the GTX 285 is very power-hungry and has already ruined a series of 800W PSUs.) [On a side note, I would like to choke the person responsible for Nvidia Optimus]
  24. @deepika: If you aren't a bot, you must be the one human that fails the Turing Test. [edit]: Okay it really is a bot... [edit 2]: For all you future people out there who are puzzled about my post: At the time of writing, some spambot posted random junk in several threads; One of those posts used to be above mine.
  25. A proper version control system might be more appropriate. Ordinarily I would suggest git, but that might be a little hard to use in a WinDOS-only environment, so I would recommend the somewhat similar Mercurial. Please don't count me in though, but I could probably do some programming work if required. (However, the last time I peeked at the Source engine was in 2007.) Also, if that helps anybody, I have experience in microelectronics, speak German and can do some basic cooking^^.