One of the best articles I've read lately on Tom's Hardware, analyzing how much benefit there is in adding an Nvidia card for PhysX physics processing alongside an AMD card to improve the gaming experience. It's great that they took into account the impact of adding low- and mid-range cards... and I'll tell you up front that, for the good of our wallets, even a modest 240 is an option worth considering!
by Igor Wallossek
Table of contents
- 1 – Introduction
- 2 – Test System
- 3 – CPU-Based PhysX: Relevance
- 4 – CPU PhysX: The x87 Story
- 5 – CPU PhysX: Multi-Threading?
- 6 – How To: His Majesty, Radeon The Fifth, And The PhysX Squire, GeForce
- 7 – GPU PhysX Roundup: Free For All
- 8 – GPU PhysX: What Card Is Best?
- 9 – Summary And Conclusion
Rarely does an issue divide the gaming community like PhysX has. We go deep into explaining CPU- and GPU-based PhysX processing, run PhysX with a Radeon card from AMD, and put some of today's most misleading headlines about PhysX under our microscope.
The history and development of game physics is often compared to that of the motion picture. The comparison might be a bit exaggerated and arrogant, but there’s some truth to it. As 3D graphics have evolved to almost photo-realistic levels, the lack of truly realistic and dynamic environments is becoming increasingly noticeable. The better the games look, the more jarring they seem from their lack of realistic animations and movements.
When comparing early VGA games with today's popular titles, it’s amazing how far we’ve come in 20 to 25 years. Instead of animated pixel sprites, we now measure graphics quality by looking at breathtaking natural occurrences like water, reflections, fog, smoke, and their movement and animation. Since all of these things are based on highly complex calculations, most game developers use so-called physics engines with prefabricated libraries containing, for example, character animations (ragdoll effects) or complex movements (vehicles, falling objects, water, and so on).
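To give a rough idea of what such an engine computes every frame, here is a minimal, self-contained sketch of a per-frame rigid-body update for falling objects. This deliberately does not use the PhysX API; the struct and function names are ours, and a real engine solves far more elaborate collision and constraint systems than this simple Euler integration.

```cpp
#include <vector>

// Minimal rigid-body state: height and vertical velocity only.
// Illustrative only -- NOT the PhysX API.
struct Body {
    float y;   // height in meters
    float vy;  // vertical velocity in m/s
};

// Advance every body by one fixed time step using simple Euler integration.
void stepSimulation(std::vector<Body>& bodies, float dt) {
    const float gravity = -9.81f;   // m/s^2
    for (Body& b : bodies) {
        b.vy += gravity * dt;       // accelerate under gravity
        b.y  += b.vy * dt;          // move by the new velocity
        if (b.y < 0.0f) {           // crude ground collision
            b.y  = 0.0f;
            b.vy = -b.vy * 0.5f;    // bounce with some energy loss
        }
    }
}

int main() {
    std::vector<Body> bodies{{10.0f, 0.0f}, {5.0f, 0.0f}};
    for (int frame = 0; frame < 600; ++frame)   // ~10 seconds at 60 FPS
        stepSimulation(bodies, 1.0f / 60.0f);
}
```

Multiply this kind of per-object update by thousands of particles, cloth vertices, or debris fragments per frame and it becomes clear why where the physics runs (CPU or GPU) matters.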
Of course, PhysX is not the only physics engine. Up until now, Havok has been used in many more games. But while both the 2008 edition Havok engine and the PhysX engine offer support for CPU-based physics calculations, PhysX is the only established platform in the game sector with support for faster GPU-based calculations as well.
This is where our current dilemma begins. There is only one official way to take advantage of PhysX (with Nvidia-based graphics cards) but two GPU manufacturers. This creates a potential for conflict, or at least enough for a bunch of press releases and headlines. Like the rest of the gaming community, we’re hoping that things pan out into open standards and sensible solutions. But as long as the gaming industry is stuck with the current situation, we simply have to make the most of what’s supported universally by publishers: CPU-based physics.
Preface
Why did we write this article? You may have seen conflicting news stories and articles on this topic; we want to shed some light on the details of recent developments, especially for readers without any programming background. That means we will have to simplify and skip a few things. On the following pages, we examine whether, and to what extent, Nvidia may be limiting PhysX CPU performance in favor of its own GPU-powered solutions, whether CPU-based PhysX is multi-thread-capable (which would make it competitive), and finally whether all physics calculations really can be moved to GPU-based PhysX as easily and with as many benefits as Nvidia claims.
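For readers unfamiliar with the multi-threading question, the sketch below shows in generic C++ (again, not PhysX code) how independent per-object physics work can, in principle, be split across all available CPU cores. Whether the PhysX runtime actually distributes its CPU work like this is exactly what the article goes on to investigate.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Same illustrative rigid body as before -- NOT the PhysX API.
struct Body { float y, vy; };

// Integrate a contiguous slice of bodies; independent slices can run in parallel.
void integrateRange(std::vector<Body>& bodies, std::size_t begin, std::size_t end, float dt) {
    const float gravity = -9.81f;
    for (std::size_t i = begin; i < end; ++i) {
        bodies[i].vy += gravity * dt;
        bodies[i].y  += bodies[i].vy * dt;
    }
}

// Split the per-body work evenly across the available hardware threads.
// A real engine would reuse a persistent thread pool instead of spawning
// threads every frame; this is only meant to show the idea.
void stepSimulationParallel(std::vector<Body>& bodies, float dt) {
    unsigned threads = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (bodies.size() + threads - 1) / threads;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end   = std::min(bodies.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(integrateRange, std::ref(bodies), begin, end, dt);
    }
    for (std::thread& w : workers) w.join();
}

int main() {
    std::vector<Body> bodies(100000, Body{10.0f, 0.0f});
    for (int frame = 0; frame < 600; ++frame)
        stepSimulationParallel(bodies, 1.0f / 60.0f);
}
```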
Additionally, we will describe how to enable a clever tweak that lets users with AMD graphics cards use Nvidia-based secondary boards as dedicated PhysX cards. We are interested in the best combination of different cards and what slots to use for each of them.
Source and full article: http://www.tomshardware.com/reviews/nvidia-physx-hack-amd-radeon,2764.html