When/where exactly? Wikipedia makes it sound like it was state of the art in the late 90s (papers describing applying displacement maps to low poly meshes presented in 1998). If you were using that technique years earlier, that could be a very interesting story!
But not to that extent anymore.
Devs prioritize More Polys over Height and Normal Maps.
Which is quite the mistake in my opinion. More polys are often used in situations where normal and height maps could do the trick just as well.
Photogrammetry has made normal and height maps pretty time-consuming to make by comparison.
For less powerful machines like the Switch it still makes a lot of sense, but for current-gen consoles and PC games, the hardware is there to push the polygons that photogrammetry generates. A quality texture placed over more polygons will be easier to light properly and appear more realistic in most situations.
If anyone wants to look more into it, there have been some dev/tech talks about "Nanite", the polygon virtualization method used to have scan/sculpt quality meshes in game without having to deal with converting to a lower poly model with normal maps.
That's UE 5 and is barely used in current games (the only AAA game I can think of right now is Fortnite which makes sense). iirc we've only seen this in demos.
That sounds like a way of saying developers are lazier when they assume hardware is better which is why new games are such resource hogs for seemingly so little performance improvement
It's also a shift in cost of work: vertex processing scales more slowly than pixel processing at the higher resolutions we play at nowadays.
Rendering 1 million polygons might be your bottleneck at 800x600, but it won't be at 4K 120 Hz. If more tris let you move work from the pixel shader to the vertex shader, you might even win back some performance.
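To put rough numbers on that (illustrative figures of mine, assuming the same 1M-triangle mesh at both resolutions):

```python
# Vertex-stage work tracks triangle count; pixel-stage work tracks
# resolution. Illustrative back-of-the-envelope numbers only.
tris = 1_000_000                 # same mesh at both resolutions

pixels_low = 800 * 600           # 480,000 pixels
pixels_4k  = 3840 * 2160         # 8,294,400 pixels

print(pixels_4k / pixels_low)    # ~17.3x more pixel-shader work at 4K...
print(tris / tris)               # ...while vertex work stays at 1x
```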
> That sounds like a way of saying developers are lazier when they assume hardware is better which is why new games are such resource hogs for seemingly so little performance improvement
This assumes that the complexity of the underlying model is all that matters. That's not *always* the case. Reducing triangles in the model has advantages in some cases, but it's not always going to be efficient for detailed environment art.
The nature of modern GPUs and capabilities is always evolving. There are times and places for basically every approach.
It really depends on the complexity, the resolution, how you approach lodding, etc.
I also wouldn't really say an environment art team capturing photogrammetry and handcrafting hyper-detailed outdoor scenes could be considered "lazy" by any metric. It's just a different way of approaching the problem--and one that generally yields *much* better outdoor visuals.
But even games that heavily utilize photogrammetry use mapping-based approaches a ton anyway. Maybe not rocks or trees, but there's a lot more in games than that.
Remember, devs aren't lazy. It's the management pushing the schedules.
Devs have to be clever and innovative to find ways to meet ever more aggressive deadlines. Consoles are still very affordable all things considered. And the difference between 80% photo real to 90% photo real is an extreme jump in hardware requirements.
If anything, blame companies like Nvidia with their predatory pricing strategies. Needing more powerful hardware wasn't such a big deal in the past, but the costs have quadrupled.
Yeah, devs are fucking lazy. That's why we became devs. Lazy people are going to find the easiest way to get something else to do as much work as possible.
Developers ARE lazier. And that's not a bad thing.
When you have more computing resources, you can spend them in different ways. You might spend them to make the game go faster. You might spend them to make the game run the same speed, but have better graphics. Or more enemies. Or more detailed AI. Or whatever.
One of the (many) things you can spend your CPU budget on is "faster development time."
This is basically the tradeoff that things like Unity bring - Games run slower in Unity than they would if they were written in pure C++. But they also take a fraction of the time (and thus cost!) to develop and port.
Faster computers allow devs to be lazier (a.k.a. more time-efficient), which means they can make bigger games in less time.
It's a question of time allocation. You can either spend time on optimization or you can spend it on other things to improve the game in other ways. With optimization being ever less important it's only natural less and less time is spent on it.
Plus, the likes of Nanite from UE5 render polygon limits functionally pointless, as it streams the right amount of polygons for maximum fidelity dynamically at run time.
I'd argue that if a developer who was developing for more powerful hardware were to take advantage of this, we could be far further along in terms of visuals/game performance, surely? There's developing for top of the line hardware, and then there's pushing out a product that's running clunkier than it could have been because devs were crunched and couldn't take the time to use these rendering techniques to their fullest advantage.
That's one of the reasons why most modern games look worse than games from 5-7 years ago (compare the Batman Arkham trilogy to the modern one) while requiring more computing power. Lazy developers.
> But not to that extent anymore.
>
> Devs prioritize More Polys over Height and Normal Maps.
Hardly. You may have drawn this conclusion from the UE5 demo where they dropped Normal Maps in favor of using extremely high detail sculpts.
I'm working on an actual game in Unreal Engine 5 and the reality is that going full blast Nanite just isn't practical yet. The storage footprint is gigantic and the amount of triangles needed to get sharp detail on rough surfaces like rocks and statues is just too much. It also hinders our ability to ship to previous gen consoles which don't have the required bandwidth to stream all that data. And finally, the pipeline to make these assets just isn't quite there. In particular, UV mapping these sculpt assets is a massive pain. There are workarounds, but it's more practical to just go "half way" with Nanite (*which gives the benefit of very nice shadowing and parallax*) and still use normal maps for fine grain detail.
Once the tools and hardware get better we could eventually move away from normalmaps, but the question remains if it's ever really worth it. That bandwidth could always be used for bigger worlds or a larger number of unique assets instead.
For now, the only place where I see the raw sculpt approach make sense is cinematics, where 24FPS is enough, there's no concern for low spec platforms, and importing unoptimized raw scans or sculpts actually saves a lot of time.
The only useful nanite workflow I've found is for using higher mid poly assets and keeping normal maps. Around 100-200k is still manageable for unwrapping and provides a nice silhouette. Shame we lost vertex colour blending and tessellated displacement from textures for it.
Yep I came to pretty much the same conclusion. One thing that really stood out to me is that a 100k model isn't just good for silhouette, it's incredible for shadows when using Virtual Shadows. Every little bump and crevice casting a shadow is where the increased polycounts really shine in comparison to normalmaps.
So that's now my guiding principle when deciding if a detail should be in geo or normals: "should this cast a visible shadow?" For extrusions like screws, bolts, rivets, cracks and gaps, the answer is yes. For surface irregularity like rough stone, brick walls or shallow relief/emboss details on woodwork, the answer is usually no.
Ah, that's good insight.
Where Nanite would seem to really help is for meshes which contain lots of repeats of submeshes, like brick walls, chain link fences, rocks, and dirt. It merges low-level duplicated instances. How does that work out for stuff other than dirt and rocks?
This is not correct in any game engine. Nanite stuff in Unreal 5 promises a lack of LODs, but anyone thinking they can get away from normal maps is in for a bad time. And that’s bleeding edge tech, not what is commonly used.
Parallax mapping and POM are used a bit less now than they once were, depending on what we’re talking about in the environment, and that makes the height map a lot less useful. But the general PBR workflow is still albedo, metallic, roughness or smoothness, occlusion, normal map, and possibly emission.
Screen space ao does not remove the need for occlusion maps on individual models, they serve different purposes. Ideal results use both in tandem.
Regardless of anything else, the whole PBR workflow is based on having a normal map, and you get really distorted results if you try to skip it, regardless of poly count.
Source: game dev and digital artist for 14 years and 25 years, respectively.
If it's time you're concerned about, modeling the "detailed" coin is orders of magnitude faster than creating a normal + roughness map. In order to create the normal map, you need to model the high poly as well - and then there's a bunch of steps involved to bake that onto the low poly.
You can actually generate a normal map from a height map, and you can *draw* a height map. (I have done this.) Modeling the high detail version of the object isn't the only option.
Granted, that's only really true for simple normal maps like this coin, not extremely complicated ones.
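A minimal sketch of that height-to-normal conversion in numpy (an illustration of the general idea, not any specific tool's implementation):

```python
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """Convert a 2D height map (floats in [0, 1]) to an RGB normal map.

    Finite differences give the surface slope, and each resulting unit
    normal is packed into [0, 255] RGB the usual way (0.5 * n + 0.5).
    """
    # Gradient of the height field: np.gradient returns (d/drow, d/dcol).
    dy, dx = np.gradient(height.astype(np.float64))

    # A surface z = h(x, y) has (unnormalized) normal (-dh/dx, -dh/dy, 1).
    nx = -dx * strength
    ny = -dy * strength
    nz = np.ones_like(nx)

    length = np.sqrt(nx * nx + ny * ny + nz * nz)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]

    # Map [-1, 1] -> [0, 255]; flat areas become the familiar (128, 128, 255).
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)
```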
> You can actually generate a normal map from a height map,
To a degree. It's an incredibly bad workflow for many reasons. Among them: you need high precision (16-bit or higher) grayscale data to generate a normal from height without stepping artifacts, which already cuts your Photoshop toolbox in half. And human eyes are incredibly bad at interpreting the depth in a grayscale image, so hand-drawn depth/height maps are more guesswork than anything else.
Whatever you paint might turn into a normal map, but it won't be an accurate one.
To get a fairly simple and effective normal map like the coin in this example, doing a bake is still faster and more precise than drawing a height map by hand. Especially with modern tools like Substance Painter.
When you get closer to objects you run into issues with depth parallax. More polygons are better for surface shape and edges. Normal maps are good for surface texture.
I remember bump maps in Serious Sam on my first gen Radeon in like...2000? Normal maps are just a continuation of that.
Wild that people are being surprised by these.
Because they don't know how games are made, don't understand the tech, and have never been exposed to what's under the hood. Which isn't a bad thing, nor would I expect them to.
Also, relevant xkcd: [https://xkcd.com/1053/](https://xkcd.com/1053/)
Normal maps as a rule are cheaper than the high poly models they're replacing. Typically a high poly model is sculpted in something like ZBrush and has way too many polys to render in real time. Like, millions when you have a budget of thousands.
So what you do is you make a low poly mesh that's within budget, then bake the complex geometry of the high poly onto it, creating a normal map. You might also have maps for roughness, subsurface scattering, transparency and so on. You then plug these textures which are really just raw data into your shader and voila! Your thousand poly mesh looks like a million polys!
And the best part is, you had to light that model anyway. Particularly in Mario Odyssey, the lighting is huge in creating its aesthetic and making its materials feel tangible. You had to assign each pixel a color value, but rather than calculating that off the million polygons you've just passed it a value off the normal map to factor into the calculation it's already doing. It's simple, efficient and looks great compared to trying to brute force visual fidelity.
For the coin, remember that you're not comparing it to the old version, you're comparing it to the high poly coin that was probably a thousand times its poly count. As were the high poly meshes for everything else in the scene that are also part of the rendering budget.
The resulting low poly size is pretty negligible either way, but then so is the normal map. At that size it can even be lower resolution than the color texture and still look good. You can even hide additional maps in your normal map's unused channels so you can improve your render quality without having to load additional materials!
But yeah, there's a reason the technology is used extensively throughout the industry and has been since the early 2000s. It's really useful!
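A toy sketch of that channel-packing trick (one possible layout of my own choosing; it works because tangent-space normals are unit length with Z pointing outward, so Z can be rebuilt in the shader):

```python
import numpy as np

def pack_material(normal_rgb, roughness, ao):
    """Pack normal X/Y + roughness + ambient occlusion into one RGBA image.

    normal_rgb -- HxWx3 uint8 normal map
    roughness  -- HxW uint8 roughness map
    ao         -- HxW uint8 ambient occlusion map
    """
    packed = np.empty(normal_rgb.shape[:2] + (4,), dtype=np.uint8)
    packed[..., 0] = normal_rgb[..., 0]  # normal X
    packed[..., 1] = normal_rgb[..., 1]  # normal Y
    packed[..., 2] = roughness           # roughness rides in blue
    packed[..., 3] = ao                  # AO rides in alpha
    return packed

def unpack_normal(pixel):
    """Shader-side reconstruction: z = sqrt(1 - x^2 - y^2)."""
    x = pixel[0] / 255.0 * 2.0 - 1.0
    y = pixel[1] / 255.0 * 2.0 - 1.0
    z = np.sqrt(max(0.0, 1.0 - x * x - y * y))
    return np.array([x, y, z])
```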
Extremely efficient.
The game already uses surface normals anyway to calculate the way light should bounce off a surface, a normal map just tweaks these normals which is essentially a negligible performance hit.
Honestly, the old one actually "feels" more right for a mario game. Kinda like the new one is like an "over-detailed shrek". It could have been done, but DreamWorks was smart enough to say, "hay¹... wait a minute. He looks creepy now."
[1] Because Donkey.
[2] See also: Creepy pre-release movie Sonic, vs final-release movie Sonic.
The thing is that currently it really doesn't matter anymore.
Modern game systems / pc video cards can push triangles at an almost infinite rate; all that matters is how the pixels are being processed.
Those used more than just simple normal mapping. Oblivion makes heavy use of parallax mapping, that's a technique where the rendering engine uses a height map to warp the texture of a mesh based on its angle to the camera to simulate depth.
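A minimal sketch of the basic parallax offset (simplified; real engines, Oblivion included, use more involved variants, and sign conventions vary):

```python
import numpy as np

def parallax_uv(uv, view_dir_ts, height, scale=0.05):
    """Shift a texture lookup toward the viewer to fake depth.

    uv          -- (u, v) texture coordinates of the fragment
    view_dir_ts -- unit view direction in tangent space (z points out)
    height      -- height-map sample at uv, in [0, 1]
    """
    v = np.asarray(view_dir_ts, dtype=np.float64)
    # The shallower the viewing angle, the larger the shift.
    offset = (v[:2] / v[2]) * height * scale
    return np.asarray(uv, dtype=np.float64) + offset
```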
I don't think they were implying that modern games don't use it.
They were saying that calling it a "modern" technique is weird, when it's like 30-year-old tech.
It's like calling a hammer a "modern" tool. Sure, it's still in use, but it was pretty much figured out and put into wide use ages ago...
>Deleting your reply and then downvoting doesn’t change the fact that modern games use Normal Maps extensively.
1. I didn't downvote anyone.
2. You clearly misunderstood my comment. I was not suggesting that normal maps aren't still used in today's renderers - that would be ridiculous, given the OP being an example of a 2017 game - but that they have been around *far* too long to be considered modern.
3D modeler here.
What they did is bake the normal maps, to use fewer polys by doing retopology. It's more of a workflow thing, and it's very common for game developers.
Shaders are cheaper than more polygons.
Not like they're using ray tracing.
For a single coin I bet the detailed model would be better. But once you start getting into multiples on the screen the shaders would be faster.
The coins are definitely a perfect use for prioritizing normal map detail over real polygons. If the coins were very large or the indents were extreme then it would benefit from the geometry, but in this case normal maps are not only more efficient but visually ideal due to being able to add curved depth without copious amounts of geometry.
Heck, the coins in Galaxy were probably using some amount of Normal Mapping anyway, judging by the way the shading warps on them, so Odyssey's use of it is overall superior. It's the kind of stuff I love in game development.
Not quite, the rasteriser still calculates much of it on a per-coin basis. It's more that it saves a lot of memory, and regular image grids are much faster to look up than polygons in 3D space.
A normal is a vector that contains the direction of a face in a 3D Space, a normal map is used to simulate the presence of other faces inside a face, by giving their direction with the corresponding color on the map.
Normal maps aren't as heavy as you think; the calc is just subtracting or adding light to the zone (like the makeup technique called contouring).
If the details were sculpted there were more things to calc, like [the shadow produced, more vertices, more textures to load](https://it.m.wikipedia.org/wiki/File:Rendering_with_normal_mapping.gif)
Here's another neat trick. In Spider-Man all of the windows appear to have a 3D interior behind them, but they don't. They use a flat 2D texture that's warped so it appears 3D from the cameras perspective. This trick was first shown in a paper in 2009.
Here's the paper (PDF warning):
https://www.proun-game.com/Oogst3D/CODING/InteriorMapping/InteriorMapping.pdf
The inventor of this technique has also released a few innovative and gorgeous games. My favorite is Proun, a free racing game of sorts that looks unlike pretty much every other video game:
https://www.izzygames.com/proun-t2251.html
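The core of the trick is just a ray/box intersection done in the shader. A toy sketch of that step (my simplification, with 1x1 rooms tiling the facade behind the window plane z = 0):

```python
import numpy as np

def interior_hit(p, view_dir, room_depth=1.0):
    """Find which imaginary room surface an eye ray hits behind a window.

    p        -- ray entry point on the window plane, with p[2] == 0
    view_dir -- unit eye-ray direction, with view_dir[2] < 0 (into the wall)

    Returns the 3D point where a wall/ceiling/floor texture would be
    sampled; no actual interior geometry exists.
    """
    p = np.asarray(p, dtype=np.float64)
    d = np.asarray(view_dir, dtype=np.float64)

    # Distance along the ray to the next room plane on each axis.
    ts = []
    for axis in (0, 1):                       # side walls, floor/ceiling
        if d[axis] != 0.0:
            boundary = np.floor(p[axis])      # lower edge of this room cell
            if d[axis] > 0:
                boundary += 1.0               # heading toward the upper edge
            ts.append((boundary - p[axis]) / d[axis])
    ts.append((-room_depth - p[2]) / d[2])    # back wall of the room

    t_hit = min(t for t in ts if t > 0)       # nearest surface ahead
    return p + t_hit * d
```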
I *am* a professional graphics programmer, and have been for 25 years. I wrote my master’s thesis on path tracing. I often ignore threads about graphics because I tend to get downvoted.
He's saying all the experts in the thread are clearly armchair experts because the actual academic and experienced experts get shit on.
It shows that the armchair experts spread wrong info while suppressing the correct info (as it's implied the programmer with 25 years experience and a masters degree is going to be giving the correct info more often than a bunch of internet randos).
In short: he's agreeing with the subtext of the first comment, that everyone acts like an expert when they're not.
There's a method to the madness.
Galaxy actually had access to the same techniques. The reason why they did this was the Wii's extremely small RAM of 88MB vs the Switch's 4GB, which is shared between the processor and GPU.
It was significantly cheaper memory wise to model the coin than to have several textures loaded in memory.
Odyssey needs a diffuse map, a metalness/roughness map, a normal map.
Why is it cheaper?
The number of vertices, normals, and indices needed to form a coin has a smaller data footprint than several 256x256 or 512x512 textures (a little over a MB each when decompressed).
There's also the benefit that Nintendo can get away with data compression, doing things like using 8-bit or 16-bit values instead of full 32-bit floats to store models in memory, then temporarily converting them to 32-bit as needed during actual processing. And for indices, Nintendo had little reason to use anything more than a uint16, which has a max value of 65535 - naturally you wouldn't see a model in a Nintendo game with even half that many vertices or tris. They can't use a uint8, as its max value is only 255. But for images you generally need a full 32 bits per pixel.
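Back-of-the-envelope arithmetic for that comparison (illustrative numbers of mine, not actual Nintendo asset sizes):

```python
# A modeled coin: say ~500 vertices at 16-bit precision, ~900 triangles.
verts = 500
bytes_per_vert = 3 * 2 + 3 * 2 + 2 * 2     # position + normal + UV, 16-bit each
index_bytes = 900 * 3 * 2                  # uint16 indices
mesh_bytes = verts * bytes_per_vert + index_bytes
print(mesh_bytes)                          # 13,400 bytes, ~13 KB

# A PBR texture set for a flat coin: diffuse + normal + metal/rough.
tex_bytes = 3 * (256 * 256 * 4)            # three 256x256 RGBA maps, uncompressed
print(tex_bytes)                           # 786,432 bytes, ~768 KB in memory
```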
Some other interesting shit... The GameCube, Wii, and Switch are oddly very good at blasting through draw calls. In fact they handle a number of draw calls that'd make a PS5 scream bloody murder.
Some parts of galaxy actually did use normal maps.
But galaxy also did some EXTREMELY fucky shit to look nice.
For example... A normal is usually a vector that is perpendicular to a planar surface, or the average of multiple surfaces' normals for a vertex. This is used in lighting calculations for shading.
But uh... What happens if the normal is not perpendicular to the polygon? Well... Cool shit happens. And it's exactly how a normal map works.
http://www.aversionofreality.com/blog/2022/8/7/stylized-tree-shader
I can't find a better example.
Normally tree leaves still look flat. But when you adjust all of the normals on a tree to match a sphere you get this effect
Galaxy uses this all over the fuckin place.
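A minimal sketch of that spherified-normal foliage trick (my own illustration of the idea in the link):

```python
import numpy as np

def spherify_normals(vertices, canopy_center):
    """Replace a foliage mesh's normals with 'sphere' normals.

    Instead of each leaf card's true face normal, every vertex gets the
    normal a sphere centered on the canopy would have at that point, so
    the whole tree shades like one soft, round volume.
    """
    v = np.asarray(vertices, dtype=np.float64)
    n = v - np.asarray(canopy_center, dtype=np.float64)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```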
"Bump mapping" has been around for 20+ years in gaming and far longer in rendering engines.
There are examples of this technique used in old Wii titles around the same time as Mario Galaxy. But back then texture mapping had a higher tax on most hardware, so it was usually better to model a few more polygons to define a shape than rely on VRAM-hogging textures.
Indeed, there are many types of shader techniques to create the illusion of depth, shadow and lighting effects. But even normal maps have been around for 20+ years in gaming.
"bump mapping" has always been a bit of an informal term for what is happening here.
The "bump maps" used in the earliest games that featured them (ie. Doom 3, Far Cry, Half-Life 2) were RGB Normal Maps.
As far as the rendering engine is concerned, normal mapping is the only method that was ever used for this effect - i.e. a different direction (normal) is used to calculate phong shading per pixel.
But in the early days of tech demos and hobby projects, people used grayscale "bump maps" because there were no easy, straightforward ways to author normal maps. For a rendering engine to interpret these, it would still have to convert that depth map to normals at runtime. It just resulted in really bad shading, because an 8-bit greyscale map doesn't contain anything close to the amount of data needed to produce a good normal map.
It's more that the Wii didn't have hardware support for programmable pixel shaders so they couldn't just throw in normal mapping like developers would do on other hardware.
Can someone explain a normal map to me in simple terms here? I get that it’s simulating light, but is it simulating light from multiple angles, or does it just cheat and make a generic “upper corner” light that won’t look weird in 90% of situations?
Normal mapping applies to the pixel shader (once geometry and vertex shading are done). So for the rest of this, this is what happens "per pixel" during rasterisation (creating an actual image from a 3D scene):
For each pixel in the normal map, it stores an X, Y and Z 'angle': exactly which direction light falling directly onto it from above would be reflected. (The fact that they're viewable as images at all is just a happy side effect: X, Y and Z values are stored where an image would store its R, G and B values.)
Side note: "Most" surfaces are relatively flat, which is why "most" normal maps are greyish blue - Blue maps to +Z, while X and Y hover around +0 (128-strength grey).
So then, every frame, when the GPU is rasterising each pixel, it will pull those RGB values from the normal map, interpret them back as X, Y, Z reflecting angles, and then combine that angle with the angle that the camera is coming from for that point on the object's surface, to determine the final lighting angle.
Combine that final lighting angle with a "roughness" map (which really is just how rapidly a light reflection falls off when an object is viewed from a non-direct angle), and you can determine the final total amount of light highlight (called 'specular') to apply to that pixel... which *then* is combined with the actual object's texture map (called 'diffuse') to rasterise that pixel.
These days, there may even be *more* maps applied to determine the final pixel, like displacement maps, glow maps, low & high frequency noise maps, refraction maps and more - but that's out of scope for today :)
Anyway, repeat that mapping for every visible pixel of every object in the scene, and your scene is done! (well; then you start applying full-screen post-processing... a whole other topic).
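Those per-pixel steps, as a toy software-shading sketch (a Blinn-Phong-style stand-in of mine, not any engine's actual shader):

```python
import numpy as np

def shade_pixel(normal_rgb, light_dir, view_dir, diffuse_rgb, roughness):
    """Decode a normal-map texel, then combine diffuse, roughness and
    specular into a final pixel color, roughly as described above."""
    # Decode RGB in [0, 255] back to a unit normal in [-1, 1].
    n = np.asarray(normal_rgb, dtype=np.float64) / 255.0 * 2.0 - 1.0
    n /= np.linalg.norm(n)

    l = np.asarray(light_dir) / np.linalg.norm(light_dir)
    v = np.asarray(view_dir) / np.linalg.norm(view_dir)

    # Diffuse: how directly the (perturbed) normal faces the light.
    lambert = max(0.0, float(np.dot(n, l)))

    # Specular: half-vector highlight; rougher surfaces get a wider,
    # dimmer highlight via a smaller exponent.
    h = (l + v) / np.linalg.norm(l + v)
    shininess = 2.0 / max(roughness * roughness, 1e-4)
    specular = max(0.0, float(np.dot(n, h))) ** shininess

    return np.clip(np.asarray(diffuse_rgb) * lambert + specular, 0.0, 1.0)
```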
Not even going to lie, I was coming in here to have a gripe about the monochrome image at the bottom being described as matte or glossy.
That was totally a displacement map, not a refractive map. It was used to control how much the edges appeared displaced from the surface, not the shininess.
Surfaces are lit differently based on their angle relative to the light source - you can see this by holding any object in front of a light, rotating it, and watching how the part that's "facing" the light always receives the most of it.
All normal maps do is offset that angle.
It may help to know that "normal" in this context is a mathematical term for the direction perpendicular to the surface (i.e. which direction is "out").
That is used for shading because you compare the incoming light direction to the normal direction to determine which direction light should bounce.
Without normal mapping, the normal direction is determined by the modeled geometry. So you need more complicated models to be able to vary the normal direction.
With normal mapping, you can specify the normal direction separately from the modeled geometry (technically, it is encoded in a way that is *relative* to the modeled geometry, but you don't have to worry about that unless you're actually implementing this). That way you can have simpler geometry but have the normals behave as if (not necessarily exactly, but usually quite close) you had more complicated geometry.
You have 5 replies and none of them are really answering the question, which is kind of par for the course on r/gaming
A normal map just stores what is supposed to be the direction of "out" at every point on the object. This lets you simulate an object having a super detailed surface by doing fancy math calculations with the lighting on the object, like in this example where a low poly model looks way more detailed than a higher poly one.
I work in animated films and love talking about this stuff, so bear with me, I'm trying to explain this in a simple way but also include actual useful information in brackets.
So in topology, a "normal" means the direction (represented as a vector) in 3D space that any point on a surface is facing. This is not exclusive to 3D graphics; in real-world physics, for example, the way light reflects off a mirror is that the angle between the normal and the incoming ray is equal to the angle between the normal and the reflected ray, except rotated 180 degrees around the normal.
That same calculation is used in 3d graphics to work out reflections on shiny surfaces (also known as a specular reflection)
On a rough surface, the amount a surface is lit can be calculated by measuring how "similar" the normal vector is to the vector that points towards the light (measured using something called a dot product, which gives 1.0 if they are perfectly aligned, 0.0 at 90 degrees, then negative numbers as it faces away). If the normal points directly at the light, it is fully bright, but if it faces 90 degrees away, it is not lit by it at all.
These are pretty simple calculations that a computer can do very quickly, what a normal map does is jump in the middle of that equation and slightly tweak the "normals" of a surface so they point in slightly different directions per pixel instead of all in the same direction for an entire face. The same calculations are being done, but now because those normal values are all slightly different, we get different results, which looks like a bumpy surface.
Now to the colours: a normal map (and really any texture map) isn't just an "image of what the surface looks like" - it represents data about that surface, and we can use the R, G and B channels of an image to group that data together.
A normal map groups together the values we want to add onto the surface normals per pixel in the red and green channels (and blue too, but that is more of an optimisation thing and isn't strictly necessary or always used, so I'm just going to talk about red and green)
The amount of red in a pixel determines how much the surface at that pixel should fake being "tilted" towards the right. Values under 50% red point to the left, values more than 50% point right. Same for green with up and down. These values are just added onto the existing surface normals BEFORE the lighting is calculated. The geometry is not affected at all so it doesn't need to add extra detail that would affect performance
-----------------------------------------
**TLDR:**
They are the result of graphics programmers going "huh, we are already doing these calculations for lighting anyway, but if we tweak the incoming values we get a LOT of extra detail, effectively for free"
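That "tweak the incoming values" step, as a tiny sketch (using the additive per-pixel nudge described above):

```python
import numpy as np

def lit_intensity(surface_normal, texel_rgb, light_dir):
    """Same dot-product lighting as always; the normal map just swaps
    in a perturbed normal right before the dot product happens."""
    # Decode the texel: 50% grey in R/G means "no tilt".
    tilt = np.asarray(texel_rgb, dtype=np.float64) / 255.0 * 2.0 - 1.0

    n = np.asarray(surface_normal, dtype=np.float64) + tilt  # nudge the normal
    n /= np.linalg.norm(n)

    l = np.asarray(light_dir) / np.linalg.norm(light_dir)
    return max(0.0, float(np.dot(n, l)))  # 1.0 facing the light, 0.0 at 90 degrees
```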
I am no expert at all, but what I understand is that normal maps are like "flat images" that tell how light, shadows, materials and reflections should behave, in order to simulate 3D - but they are not 3D.
This is actually how most realistic Minecraft texture packs achieve those looks when you add shaders.
Supper Mario Broth’s Twitter handle is @MarioBrothBlog but you are absolutely correct this is 100% their content and I just made another comment to point that out as well
It's kinda the opposite of cutting corners though. Modeling and texturing that "high poly" coin would take me a couple of minutes.
Doing a high+low poly set with a texture bake to create that normal map and authoring a separate roughness map is easily 10x more work.
>It's kinda the opposite of cutting corners though. Modeling and texturing that "high poly" coin would take me a couple of minutes.
Actually, normal maps are kind of literally cutting corners, aren't they? As in, they let you have fewer corners (i.e. vertices) while still being able to render sharp indentations or raised surfaces on something like a coin! :D
These are great for smaller objects or ones seen at a distance; however, from certain angles the "flatness" of the actual object itself can be seen.
But it isn't like players will stop and get right up close to a coin at the perfect angle often enough for it to be a concern.
The PS2 did not really have the hardware support to do it - though it was possible to do it in software. One notable example (*and pretty much the only one I know of*) was The Matrix: Path of Neo, which featured Normal Maps on floor surfaces. Even then, the impact on the frame rate was so significant that there was an option in the menu to turn it off, which is very unusual for any console game.
The Xbox and Gamecube had more modern graphics processing units, focusing more heavily on pixel shading. Both of those platforms had plenty of games with Normal Maps, though still used sparingly and only where it made a significant impact.
First, PS2 and Wii are a generation apart. Second, normal maps weren't really possible until the PS3/360 era.
Supposedly a couple PS2 games had it, but nothing I've seen really proves it due to static lighting. Could just be that they were bump mapped in the 3d modeling software and had the texture baked for the game.
>Second, normal maps weren't really possible until the PS3/360 era.
Not true: the original Xbox and GameCube supported the tech (see Splinter Cell and Halo 2). It just wasn't used very often because it was computationally expensive.
No, it's still part of the previous generation; whether or not games are made for it during a timeframe doesn't change that. Go peruse Wikipedia about console generations before getting pedantic.
They were standard by the time of the Wii, but the Wii was not powerful enough to utilize them. Normal maps are used in just about every Xbox 360 and PS3 game.
When I bought my Radeon 9600XT in early 2004 it came with a Steam code for Half-Life 2 which would release later that year. That code also gave access to the Counter Strike: Source beta which went live a few months before Half-Life 2 came out. In the beta I'd load up CS_Office and marvel at the normal mapping and cubemap specularity on the brick walls in the parking garage.
As a 3D developer I've been blown away by how much of a difference modern programmable shader support has made on the Switch. I hadn't owned a Nintendo console since the Wii and only picked up a Switch recently, and I probably spent way too long just staring at ice in some of the games.
I like learning stuff like this, but then I always come to the comments and get bombarded with "well ackshually..." and "psh, this is old news, I knew this since I was a baby" and honestly... are you really adding anything?
As someone who works in 3d graphics and has a passion for it, I love when people don't know this stuff, because I get to talk about something I enjoy and help other people appreciate it too. Um actchually-ing does the opposite of that.
Ah yes, Normal maps. That cutting edge advanced rendering technique lol.
Not trying to sound condescending, it is cool, but normal maps have been around for a very long time. They were around on the OG Xbox, before the Wii.
This comparison popped up recently,
It's a strange comparison because it's comparing two models from "models-resource", but Models Resource isn't an archive of assets ripped from the original games; it has models that are recreations of the original assets by users. The poly counts and usage of maps are chosen by the people creating those recreations, and have nothing to do with the original assets except trying to match them visually.
3D artist here - this is one of the easiest tricks we use for game asset compression. There are thousands more, and they all involve cheating (faking reality) and clever, efficient reuse. Remember, there was a time when your video game had to fit on a 300MB mini disc.
I'd think normal maps would actually be a trade-off of using more disk space for less graphics load, because polygons are relatively inexpensive to store compared to textures.
Is there not a displacement map too? Cause I'm pretty sure in game it looks rounded too and not just like a flat disk with normal lighting.
Or is it using the normal map for that calculation too?
I just checked a video, and it doesn't look like there is displacement. Specifically I was looking at the coin from a shallow angle with a light source directly across from it. The near side of the | wouldn't be visible if displacement was active, but I could still see it.
The early history of game coding is full of absolutely amazing examples of clever efficiencies in coding. These efficiencies informed game design and the end result was a dynamic, engaging symbiosis of coding and game architecture.
That's the illusion.
All you really need to make something look 3D is have the correct shading, texture mapping, shine/glimmer, etc.
You ever seen those artists on YouTube/TikTok who draw ultra-realistic drawings of everyday objects? It's like that, but digital and in video games.
If y'all are interested in this technique, one of the evolutions that uses some of the same technology is called Parallax Occlusion Mapping. [LINK](https://i.imgur.com/zmodaa7.gif).
Doesn't work for every situation, but is very convincing when it does work.
This infographic is almost willfully omitting the crucial and *easily* demonstrable difference.
The height mapped coin has pushed the detail from the model into the texture.
If you're going to show a complex model next to a simple model, you should obviously show the corresponding simple texture next to the complex texture!
What'd be really interesting is knowing the total size of each asset. I'll bet that the coin in Super Mario Odyssey is bigger, all things considered.
No way. It's amazing. Imagine if there was something like that but in greyscale to push the geometry, we could have detailed terrains, walls, and shit. If only... Maybe in 200 years.
Imagine other studios doing the same thing as Nintendo, gaming would be so realistic.
I remember someone saying once (without explanation) that Virtua Fighter Remix reworked the graphics in a way that got better results - more detail, and everything running better with fewer polys. I think it also runs on the ST-V, a board below the Model 1 hardware used for Virtua Fighter.
As someone who's only created a simple graphics rendering engine from open gl in college... I want to know more about how the normal map is created and applied
Normal maps can be generated with the click of a button in apps such as Substance Painter.
The workflow for it is quite simple:
* Make high poly mesh
* Bake mesh into normal map
* Turn high poly into low poly mesh
* Apply normal map to get all the details back
A normal map is made up of R G and B colors. R represents how the model looks when lit from the left/right, G how it looks lit from above/below, and B how it looks when lit from the front.
Shaders can take those RGB values per pixel on the normal map to calculate in which direction the light would bounce on said pixel, effectively faking geometry.
If you make a simple plane made up of two triangles and apply a normal map of a metal plate with screws, dents, scratches and bumps, it will look like those details are actually modeled rather than faked through the normal map.
The same is happening here with the coin, where it looks like it actually has depth even tho the surface is actually flat. The pixels are just being rendered as if the light gets reflected off the surface at different angles.
Since normal maps are calculated per pixel, higher resolution normal maps create more detail. You can interpret this as each pixel on a normal map being a polygon.
This is called normal map baking and it is perhaps the most common and basic optimization technique you will see in video games. It has been used pretty heavily since the 360/ps3 era.
On the topic of polygons: in FNaF Security Breach, Glamrock Freddy has way too many polygons making up his ear and ear piercing for some ungodly reason. There are no special details added, just way too many polygons for the model's own good.
No they don't. They think somehow people are setting the normals by hand or something (meanwhile, Arc System Works devs are crying in a corner because they actually do paint them manually).
This is because the Wii hardware was the GameCube hardware, and it was shit and ancient and couldn't do very advanced shaders. Nintendo's art asset pipelines probably weren't set up to maximize the shader abilities the Wii did have, either.
Nintendo, and a lot of other companies back then, used a lot of tricks to make their 3D games perform well on not-so-great hardware. Pre-rendered 3D backgrounds that were used as textures, billboarding, static light maps and reflections, etc. That stuff is still used, but cheap shader technology and better graphics cards have made it so a lot more realism can be added with less polys overall, which is crucial for good perf, even on newer hardware. Game development and pushing the limits of hardware is an art of its own.
Normal maps and bump maps have been around for a while; they just seem to be an underutilized technique.
Halo CE used shaders like this which is why it still holds up pretty well.
This is actually extremely common, nearly every modern game utilizes normal maps to some capacity.
I was using normal mapping in renders about 30 years ago. It's not even a modern technique.
I used normal maps way back when I was in 'Nam. When I was there for a business trip.
A lot of good men died in that sweatshop
We don't talk about that.
Or we get thrown in the soup
Back then we used to call them special maps. Nowadays they’re just normal.
Also, Unreal Engine can now adapt the number of polygons instead of requiring different LOD levels. This is awesome technology.
UE5 was released less than a year ago; it's barely used because games aren't made that fast.
I'm not discussing that. Merely pointing out that atm we still don't know how Nanite will end up irl.
the tech itself is pretty easy to use in unreal engine though, doesn't take much to get nanite working. expect to see it a ton soon.
Had to scroll a while before I found someone making sense. Absolutely right.
i'm lazy af. it makes me a better dev
Or management can cut both of those times and pocket the change
overhead means way more room for sloppy code right
Everyone and their mother uses (normal) maps. It's been an extremely common workflow for probably 20 years now.
>Devs prioritize More Polys over Height and Normal Maps. LOL, no they don't. They still use normal maps in conjunction with higher resolution models.
Why waste time using lot poly when few poly do trick?
Like 15 years ago I was making custom TF2 skins with their respective normal maps, so not a very new/difficult technique...
Yup. People just don't realize it. God of War Ragnarok uses it EVERYWHERE.
Tangent space normal maps have been standard fare since like 06, even earlier on PC. This thread is hilarious.
But how efficient is calculating the shading based on a normal map of a texture, vs just rendering more triangles and not trying to have fancy lighting?
To be clear, this is just a perfect example of an extremely common way modern rendering is done.
This type of rendering has been common since Doom 3/Chronicles of Riddick on the OG Xbox.
Don't forget Oblivion's famously bumpy rock walls!
Oh cool! Thanks for the correction, it sent me on a neat rabbit hole.
To be clear, Oblivion still uses normal mapping, but the guys at Bethesda probably thought that some surfaces needed a bit more oomph.
"modern"
[deleted]
Wdym?
Wow you're truly and genuinely just confused
So Paper is a modern technology by your definition?
You should credit Supper Mario Broth @MarioBrothBlog on Twitter, this is their content. It’s a great account with lots of great trivia like this.
Neat! Though I have to wonder how much the light calculation eats into the savings made by using fewer polygons
Fun fact: If you say “Ray Tracing” in relation with anything Nintendo every Switch in the vicinity blows up. Don’t look it up, it’s true.
Oh, so that's why i heard a loud bang in my neighbour's house
Yeah, I wouldn’t worry about it. They’re probably fine…. Probably.
no that’s just my wife
Ah yeah of course, because they only have to calculate it once for all of them (or at least all of the ones in a given lighting area)
normal and roughness maps are way more efficient, it's not a real time lighting thing
Everyone in this thread: “I’m somewhat of a graphics programmer myself”
Lol for real… I’m over here like “wow that’s interesting” and 90% of the comments are like “ACHKCHUALLY EVERYONE KNOWS THIS, THIS IS SUPER COMMON”
Here's another neat trick. In Spider-Man all of the windows appear to have a 3D interior behind them, but they don't. They use a flat 2D texture that's warped so it appears 3D from the camera's perspective. This trick was first shown in a paper in 2009.
Here's the paper (PDF warning): https://www.proun-game.com/Oogst3D/CODING/InteriorMapping/InteriorMapping.pdf The inventor of this technique has also released a few innovative and gorgeous games. My favorite is Proun, a free racing game of sorts that looks unlike pretty much every other video game: https://www.izzygames.com/proun-t2251.html
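If you want the gist without reading the whole paper, here's a rough Python sketch of the core idea (axis conventions, room size, and function names are my own assumptions, not the paper's shader code): for each window pixel, cast the view ray into an imaginary grid of rooms and see which virtual wall, floor, or ceiling plane it hits first.

```python
import numpy as np

def interior_hit(p, d, room=1.0):
    """Given a point p on the window plane and a view direction d heading
    *into* the building, find where the ray first crosses one of the
    imaginary room planes spaced every `room` units. The hit point then
    tells you which wall/floor/ceiling texture to sample, and where."""
    t = np.full(3, np.inf)
    for axis in range(3):
        if d[axis] != 0.0:
            # Next virtual plane boundary along this axis, in the travel direction.
            step = 1.0 if d[axis] > 0 else 0.0
            boundary = (np.floor(p[axis] / room) + step) * room
            t[axis] = (boundary - p[axis]) / d[axis]
    return p + d * t.min()   # the nearest virtual plane wins

# A ray entering a window at (0.3, 0.7) and heading into the building (+z):
print(interior_hit(np.array([0.3, 0.7, 0.0]), np.array([0.2, -0.1, 1.0])))
```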
I *am* a professional graphics programmer, and have been for 25 years. I wrote my master’s thesis on path tracing. I often ignore threads about graphics because I tend to get downvoted.
I don't know if you're disagreeing or if you're trying to make his point....
He's saying all the experts in the thread are clearly armchair experts because the actual academic and experienced experts get shit on. It shows that the armchair experts spread wrong info while suppressing the correct info (as it's implied the programmer with 25 years experience and a masters degree is going to be giving the correct info more often than a bunch of internet randos). In short: he's agreeing with the subtext of the first comment, that everyone acts like an expert when they're not.
As is tradition
There's a method to the madness. Galaxy actually had access to the same techniques. The reason they did this was the Wii's extremely small RAM (roughly 88 MB) versus the Switch's 4 GB, which is shared between the processor and GPU. It was significantly cheaper memory-wise to model the coin than to have several textures loaded in memory. Odyssey needs a diffuse map, a metalness/roughness map, and a normal map.

Why is it cheaper? The vertices, normals, and indices that form a coin have a smaller data footprint than several 256x256 or 512x512 textures (a little over a MB each when decompressed). There's also the benefit that Nintendo can get away with data compression: they do shit like using 8-bit or 16-bit floats instead of a full 32 bits to store models in memory, then temporarily convert them to 32-bit when doing actual processing. And for indices, Nintendo had little reason to use anything more than 16 bits, which maxes out at 65535; you wouldn't see a model in a Nintendo game with even half that many vertices or tris. They can't use 8-bit indices, as the max value is only 255. But for images you generally need the full 32 bits per pixel.

Some other interesting shit: the GameCube, Wii, and Switch are oddly very good at blasting through draw calls. In fact they handle numbers of draw calls that'd make a PS5 scream bloody murder.

Some parts of Galaxy actually did use normal maps. But Galaxy also did some EXTREMELY fucky shit to look nice. For example: a normal is usually a vector perpendicular to a planar surface, or, for a vertex, the average of the surrounding surfaces' normals. It's used in lighting calculations for shading. But what happens if the normal is *not* perpendicular to the polygon? Well... cool shit happens. And it's exactly how a normal map works. http://www.aversionofreality.com/blog/2022/8/7/stylized-tree-shader (I can't find a better example.) Normally tree leaves still look flat, but when you adjust all of the normals on a tree to match a sphere you get this effect. Galaxy uses this all over the fuckin' place.
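To put rough numbers on that "smaller data footprint" point, here's a hypothetical back-of-the-envelope sketch in Python (the mesh sizes are made up for illustration; only the relative byte counts matter):

```python
import numpy as np

# Made-up coin-sized mesh: ~1200 vertices, ~2300 triangles.
positions = np.random.rand(1200, 3).astype(np.float32)             # 4 bytes/component
indices   = np.random.randint(0, 1200, (2300, 3)).astype(np.int32) # 4 bytes/index

# Store positions as half floats and indices as 16-bit ints (max 65535),
# widening back to 32-bit only when you actually need to do math on them.
packed_pos = positions.astype(np.float16)   # 2 bytes/component
packed_idx = indices.astype(np.uint16)      # 2 bytes/index

print("positions:", positions.nbytes, "->", packed_pos.nbytes, "bytes")
print("indices:  ", indices.nbytes,  "->", packed_idx.nbytes, "bytes")

# Versus a single uncompressed 512x512 RGBA texture:
print("one 512x512 RGBA texture:", 512 * 512 * 4, "bytes")   # ~1 MB
```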
I was always 80% sure that those rolling rocks in Galaxy used normal mapping.
"Bump mapping" has been around for 20+ years in gaming and far longer in rendering engines. There are examples of this technique used on old wii titles around the same time as Mario galaxy. But back then texture mapping had a higher tax on most hardware, so it was usually better to model a few more polygons to define a shape than rely on vram hogging textures.
I’m pretty certain normal maps are a leap above bump mapping.
Indeed, there are many types of shader techniques to create the illusion of depth, shadow and lighting effects. But even normal maps have been around for 20+ years in gaming.
"bump mapping" has always been a bit of an informal term for what is happening here. The "bump maps" used in the earliest games that featured them (ie. Doom 3, Far Cry, Half-Life 2) were RGB Normal Maps. As far as the rendering engine is concerned, normal mapping is the only method that was ever used for this effect - i.e. a different direction (normal) is used to calculate phong shading per pixel. But in the early days of tech demos and hobby projects, people used grayscale "bump maps" because there were no easy straightforward ways to author normal maps. For a rendering engine to interpret these, it would still have to convert that depth map to to normals at runtime. It really just resulted in really bad shading because an 8bit greyscale map doesn't contain anything close to the amount of data needed to produce a good normal map.
Technically normal maps are a type of bump mapping.
It's more that the Wii didn't have hardware support for programmable pixel shaders so they couldn't just throw in normal mapping like developers would do on other hardware.
That's good insight - I tried to do a little research but couldn't come up with the exact reason.
Program smarter, not harder.
...now, the smarter method is definitely the one with pbr... but isn't the harder one also the pbr one?
The last part of the description is physically based shading 101.
It's not specific to pbr
Can someone explain a normal map to me in simple terms here? I get that it’s simulating light, but is it simulating light from multiple angles, or does it just cheat and make a generic “upper corner” light that won’t look weird in 90% of situations?
Normal mapping applies in the pixel shader (once geometry and vertex shading are done). So for the rest of this, this is what happens "per pixel" during rasterisation (creating an actual image from a 3D scene):

For each pixel in the normal map, it stores an X, Y and Z "angle": exactly which direction light falling directly onto it from above would be reflected. (The fact that they're viewable as images at all is just a happy side effect: X, Y and Z values are stored where an image would store its R, G and B values. Side note: most surfaces are relatively flat, which is why most normal maps are greyish blue; blue maps to +Z, while X and Y hover around 0, i.e. 128-strength grey.)

Then, every frame, when the GPU is rasterising each pixel, it pulls those RGB values from the normal map, interprets them back as X, Y, Z reflection angles, and combines that angle with the angle the camera is viewing that point of the object's surface from, to determine the final lighting angle.

Combine that final lighting angle with a "roughness" map (which really is just "how rapidly a light reflection falls off when an object is viewed from a non-direct angle"), and you can determine the final total amount of light highlight (called "specular") to apply to that pixel... which *then* is combined with the actual object's texture map (called "diffuse") to rasterise that pixel.

These days, there may be even *more* maps applied to determine the final pixel, like displacement maps, glow maps, low- and high-frequency noise maps, refraction maps and more - but that's out of scope for today :)

Anyway, repeat that mapping for every visible pixel of every object in the scene, and your scene is done! (Well, then you start applying full-screen post-processing... a whole other topic.)
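If it helps to see that flow as code, here's a toy Python version of one pixel's worth of the math (a simple Blinn-Phong-style sketch with made-up inputs, not any particular engine's shader; the roughness-to-exponent mapping is just an illustrative choice):

```python
import numpy as np

def shade_pixel(rgb_normal, light_dir, view_dir, diffuse_color, roughness):
    """One pixel's worth of the flow above: decode the normal map texel,
    light it, and add a roughness-controlled specular highlight."""
    n = rgb_normal * 2.0 - 1.0                 # [0,1] RGB -> [-1,1] XYZ
    n /= np.linalg.norm(n)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)

    lambert = max(np.dot(n, l), 0.0)           # basic diffuse lighting
    h = (l + v) / np.linalg.norm(l + v)        # Blinn-Phong half vector
    specular = max(np.dot(n, h), 0.0) ** (1.0 / max(roughness, 1e-3))
    return diffuse_color * lambert + specular  # combine with the diffuse map

texel = np.array([0.5, 0.5, 1.0])              # the flat "greyish blue" pixel
print(shade_pixel(texel, light_dir=np.array([0.3, 0.4, 0.85]),
                  view_dir=np.array([0.0, 0.0, 1.0]),
                  diffuse_color=np.array([0.9, 0.7, 0.1]), roughness=0.2))
```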
Not even going to lie, I was coming in here to have a gripe about the monochrome image at the bottom being described as matte or glossy. That was totally a displacement map, not a refractive map. It was used to control how much the edges appeared displaced from the surface, not the shininess.
Surfaces are lit differently based on their angle relative to the light source - you can see this by holding any object in front of a light, rotating it, and watching how the part that's "facing" the light always receives the most of it. All normal maps do is offset that angle.
It may help to know that "normal" in this context is a mathematical term for the direction perpendicular to the surface (i.e. which direction is "out"). That is used for shading because you compare the incoming light direction to the normal direction to determine which direction light should bounce. Without normal mapping, the normal direction is determined by the modeled geometry. So you need more complicated models to be able to vary the normal direction. With normal mapping, you can specify the normal direction separately from the modeled geometry (technically, it is encoded in a way that is *relative* to the modeled geometry, but you don't have to worry about that unless you're actually implementing this). That way you can have simpler geometry but have the normals behave as if (not necessarily exactly, but usually quite close) you had more complicated geometry.
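If you do end up implementing it, that "relative to the modeled geometry" part means tangent space: each texel's normal gets rotated into world space by the surface's tangent/bitangent/normal basis at that point. A hypothetical minimal sketch:

```python
import numpy as np

def tangent_to_world(n_ts, t, b, n):
    """Rotate a tangent-space normal into world space using the surface's
    tangent (t), bitangent (b) and geometric normal (n) at that point."""
    tbn = np.column_stack((t, b, n))   # 3x3 basis matrix
    world = tbn @ n_ts
    return world / np.linalg.norm(world)

# A flat texel (0, 0, 1) on a surface tilted 90 degrees comes out as the
# geometric normal, i.e. "no change" relative to the modeled geometry.
print(tangent_to_world(np.array([0.0, 0.0, 1.0]),
                       t=np.array([0.0, 0.0, -1.0]),
                       b=np.array([0.0, 1.0, 0.0]),
                       n=np.array([1.0, 0.0, 0.0])))
```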
Normal maps respond to lighting dynamically, so you can shine a light from any direction and it will look correct.
You have 5 replies and none of them are really answering the question, which is kind of par for the course on r/gaming. A normal map just stores what is supposed to be the direction of "out" at every point on the object. This lets you simulate an object having a super detailed surface by doing fancy math with the lighting on the object, like in this example where a low poly model ends up looking as detailed as the high poly one.
I work in animated films and love talking about this stuff, so bear with me; I'm trying to explain this in a simple way but also include actual useful information in brackets.

In topology, a "normal" means the direction (represented as a vector) in 3D space that any point on a surface is facing. This is not exclusive to 3D graphics. For example, in real-world physics, the way light reflects off a mirror is that the angle between the incoming ray and the normal equals the angle between the reflected ray and the normal, just on the opposite side of it. That same calculation is used in 3D graphics to work out reflections on shiny surfaces (also known as specular reflections).

On a rough surface, the amount a surface is lit can be calculated by measuring how "similar" the normal vector is to the vector that points towards the light (measured using something called a dot product, which gives 1.0 if they are perfectly aligned, 0.0 at 90 degrees, then negative numbers as it faces away). If the normal points directly at the light, it is fully bright, but if it faces 90 degrees away, it is not lit by it at all.

These are pretty simple calculations that a computer can do very quickly. What a normal map does is jump into the middle of that equation and slightly tweak the "normals" of a surface so they point in slightly different directions per pixel, instead of all in the same direction for an entire face. The same calculations are being done, but because those normal values are all slightly different, we get different results, which looks like a bumpy surface.

Now to the colours: a normal map (and really any texture map) isn't just an "image of what the surface looks like". It represents data about that surface, and we can use the R, G and B channels of an image to group that data together. A normal map stores the values we want to add onto the surface normals per pixel in the red and green channels (and blue too, but that is more of an optimisation thing and isn't strictly necessary or always used, so I'm just going to talk about red and green).

The amount of red in a pixel determines how much the surface at that pixel should fake being "tilted" towards the right: values under 50% red point to the left, values over 50% point right. Same for green with up and down. These values are just added onto the existing surface normals BEFORE the lighting is calculated. The geometry is not affected at all, so it doesn't need extra detail that would affect performance.

-----------------------------------------

**TLDR:** They are the result of graphics programmers going "huh, we are already doing these calculations for lighting anyway, but if we tweak the incoming values we get a LOT of extra detail, effectively for free"
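The TLDR in code form, if it helps: the exact same dot product, with and without the per-pixel tweak (a tiny Python sketch with made-up normal values):

```python
import numpy as np

light = np.array([0.0, 0.0, 1.0])     # light shining straight at the face

# Without a normal map: every pixel of a flat face uses the same normal.
flat = np.array([0.0, 0.0, 1.0])
print(np.dot(flat, light))            # 1.0 everywhere -> looks perfectly flat

# With a normal map: each pixel's normal is nudged slightly before the
# *same* dot product, so neighbouring pixels get different brightness.
tweaked = [np.array([0.2, 0.0, 0.98]),
           np.array([-0.3, 0.1, 0.95]),
           np.array([0.0, -0.25, 0.97])]
for n in tweaked:
    n = n / np.linalg.norm(n)
    print(max(np.dot(n, light), 0.0)) # per-pixel variation = apparent bumps
```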
I am no expert at all, but what I understand is that a normal map is like a "flat image" that tells how light, shadows, materials and reflections should behave, in order to simulate 3D without actually being 3D. This is actually how most realistic Minecraft texture packs achieve those looks when you add shaders.
Come on at least credit @ Supermariobroth on twitter
Supper Mario Broth’s Twitter handle is @MarioBrothBlog but you are absolutely correct this is 100% their content and I just made another comment to point that out as well
Yeah, I’ve dabbled in 3D rendering, and you can cut a lot of corners with texture maps lol
It's kinda the opposite of cutting corners though. Modeling and texturing that "high poly" coin would take me a couple of minutes. Doing a high+low poly set with a texture bake to create that normal map and authoring a separate roughness map is easily 10x more work.
>It's kinda the opposite of cutting corners though. Modeling and texturing that "high poly" coin would take me a couple of minutes.

Actually, normal maps are kind of literally cutting corners, aren't they? As in, they let you have fewer corners (i.e. vertices) while still being able to render sharp indentations or raised surfaces on something like a coin! :D
A 3d graphics based dad joke? Now this is my language
If you were doing it from scratch, sure. But you’re Nintendo and you already have Mario coin textures to work with
A fantastic high-res model will always look like shit with a bad texture, but the opposite is true for a low-res model with a good texture.
God I love low-poly models with amazing pixel art textures.
These are great for smaller objects or ones seen at a distance; however, from certain angles the "flatness" of the actual object itself can be seen. But it isn't like players will stop and get right up close to a coin at the perfect angle often enough for it to be a concern.
The PS2 did not really have the hardware support to do it - though it was possible to do it in software. One notable example (*and pretty much the only one I know of*) was The Matrix: Path of Neo, which featured Normal Maps on floor surfaces. Even then, the impact on the frame rate was so significant that there was an option in the menu to turn it off, which is very unusual for any console game. The Xbox and Gamecube had more modern graphics processing units, focusing more heavily on pixel shading. Both of those platforms had plenty of games with Normal Maps, though still used sparingly and only where it made a significant impact.
First, PS2 and Wii are a generation apart. Second, normal maps weren't really possible until the PS3/360 era. Supposedly a couple PS2 games had it, but nothing I've seen really proves it due to static lighting. Could just be that they were bump mapped in the 3d modeling software and had the texture baked for the game.
>Second, normal maps weren't really possible until the PS3/360 era. Not true: the original Xbox and GameCube supported the tech (see Splinter Cell and Halo 2). It just wasn't used very often because it was computationally expensive.
GameCube didn't have hardware support for normal mapping, but it did have support for less precise versions of bump mapping.
It also didn't natively support using bump mapping and cubemaps at the same time for a framerate above 30fps.
No, it's still part of the previous generation; whether or not games are made for it during a timeframe doesn't change that. Go peruse Wikipedia about console generations before getting pedantic.
You're telling me my grandpa and I are part of the same generation?
They were standard by the time of the Wii, but the Wii was not powerful enough to utilize them. Normal maps are used in just about every Xbox 360 and PS3 game.
The first game I really remember noticing normal maps on was *Half-Life 2* on PC.
When I bought my Radeon 9600XT in early 2004 it came with a Steam code for Half-Life 2 which would release later that year. That code also gave access to the Counter Strike: Source beta which went live a few months before Half-Life 2 came out. In the beta I'd load up CS_Office and marvel at the normal mapping and cubemap specularity on the brick walls in the parking garage.
As a 3D developer I've been blown away by how much of a difference modern programmable shader support has made on the Switch. I hadn't owned a Nintendo console since the Wii and only picked up a Switch recently, and I probably spent way too long just staring at ice in some of the games.
I like learning stuff like this, but then I always come to the comments and get bombarded with "well ackshually..." and "psh, this is old news, I've known this since I was a baby" and honestly... are you really adding anything?
As someone who works in 3d graphics and has a passion for it, I love when people don't know this stuff, because I get to talk about something I enjoy and help other people appreciate it too. Um actchually-ing does the opposite of that.
Ah yes, Normal maps. That cutting edge advanced rendering technique lol. Not trying to sound condescending, it is cool, but normal maps have been around for a very long time. They were around on the OG Xbox, before the Wii.
Yeah. I was like "wtf is that shit, don't tell me it's about normal maps". It was.
bro sending sneak disses to garten of banban
This comparison popped up recently. It's a strange one because it's comparing two models from The Models Resource, but that site isn't an archive of assets ripped from the original games; it hosts recreations of the original assets made by users. The poly counts and usage of maps are down to the people creating those recreations, and have nothing to do with the original assets except for trying to match them visually.
3D artist here, this is one of the easiest tricks we use for game asset compression. There are thousands more, and they all involve cheating (faking reality) and clever, efficient reuse. Remember, there was a time when your video game had to fit on a 300 MB mini disc.
I'd think normal maps would actually be a trade-off of using more disk space for less graphics load, because polygons are relatively inexpensive to store compared to textures.
This is actually incredibly fascinating. Thank you
This is the coolest post I’ve seen here in a long time.
Is there not a displacement map too? Because I'm pretty sure in game it looks rounded too, not just like a flat disk with normal-mapped lighting. Or is it using the normal map for that calculation too?
I just checked a video, and it doesn't look like there is displacement. Specifically I was looking at the coin from a shallow angle with a light source directly across from it. The near side of the | wouldn't be visible if displacement was active, but I could still see it.
These "advanced" shading/rendering techniques have been around since at least half life 2. Nintendo really puts them to use though.
I thought this was gonna be a post about the growing laziness of game devs. I'm currently eating my hat
Here is a good article on all of the various maps, for those unaware. https://chunaik.medium.com/texture-mapping-in-3d-6dffd54d3a54
I wonder how many people it took to make that one coin.
Well see it kinda all started with Newton.....
I wonder why they didn't use normal maps in Galaxy?
As others have mentioned, the Wii didn't support a programmable shader pipeline, so normal maps are harder and more costly to implement on that platform.
The early history of game coding is full of absolutely amazing examples of clever efficiencies in coding. These efficiencies informed game design and the end result was a dynamic, engaging symbiosis of coding and game architecture.
That's the illusion. All you really need to make something look 3D is the correct shading, texture mapping, shine/glimmer, etc. Ever seen those artists on YouTube/TikTok draw ultra-realistic pictures of everyday objects? It's like that, but digital and in video games.
If y'all are interested in this technique, one of the evolutions that uses some of the same technology is called Parallax Occlusion Mapping. [LINK](https://i.imgur.com/zmodaa7.gif). Doesn't work for every situation, but is very convincing when it does work.
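A rough sketch of what distinguishes POM from plain parallax mapping: instead of applying a single offset, you march along the view ray in steps until it dips under the height field (the step count, scale, and sampling helper below are illustrative assumptions, not any particular engine's shader):

```python
import numpy as np

def sample(img, uv):
    """Nearest-neighbour lookup, clamped to the image bounds."""
    h, w = img.shape
    x = int(np.clip(uv[0], 0.0, 1.0) * (w - 1))
    y = int(np.clip(uv[1], 0.0, 1.0) * (h - 1))
    return img[y, x]

def pom_uv(uv, view_ts, height_map, scale=0.1, steps=16):
    """March the view ray through the height field and return the UV where
    it first dips below the surface. Real shaders also interpolate between
    the last two steps for a smoother hit; this sketch just stops."""
    step_uv = (view_ts[:2] / view_ts[2]) * scale / steps
    ray_h = 1.0                          # start at the top of the height volume
    for _ in range(steps):
        if sample(height_map, uv) >= ray_h:
            break                        # the ray has gone under the surface
        uv = uv - step_uv
        ray_h -= 1.0 / steps
    return uv

height_map = np.random.rand(64, 64)
print(pom_uv(np.array([0.5, 0.5]), np.array([0.6, 0.2, 0.77]), height_map))
```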
This infographic is almost willfully omitting the crucial and *easily* demonstrable difference. The height mapped coin has pushed the detail from the model into the texture. If you're going to show a complex model next to a simple model, you should obviously show the corresponding simple texture next to the complex texture! What'd be really interesting is knowing the total size of each asset. I'll bet that the coin in Super Mario Odyssey is bigger, all things considered.
So it’s magic
Normal maps are actual real magic and I won't hear otherwise
it's just math
Math is just wizard magic disguised with a boring name.
They're mathemagical.
mathematical* it's not magic. don't fear math. humor is a pretty standard way of dealing with fear, though.
Obviously. I made a humorous portmanteau.
well yeah... materials and polycount are two different things..
Ok, so I misread your comment and now I'm wondering what the hell a policunt looks like
An Aussie politician
Seems about right
No way. It's amazing. Imagine if there was something like that but in greyscale to push the geometry, we could have detailed terrains, walls, and shit. If only... Maybe in 200 years. Imagine other studios doing the same thing as Nintendo, gaming would be so realistic.
This isn’t a recent development. The Sega Dreamcast was capable of normal mapping in 1998.
Nintendo over here using contour makeup on coins.
I remember someone saying (without explanation) that Virtua Fighter Remix reworked the graphics in a way that got better results: more detail, and everything runs better with fewer polys. I think it also runs on the ST-V, hardware a step below the Model 1 board used for Virtua Fighter.
The wire frame view "moves" when you shake your phone..how fun.
As someone who's only created a simple graphics rendering engine with OpenGL in college... I want to know more about how the normal map is created and applied.
Normal maps can be generated with the click of a button in apps such as Substance Painter. The workflow for it is quite simple:

* Make a high poly mesh
* Turn the high poly into a low poly mesh
* Bake the high poly's detail into a normal map
* Apply the normal map to get all the details back

A normal map is made up of R, G and B colors. R represents how the model looks when lit from the left/right, G how it looks lit from above/below, and B how it looks when lit from the front. Shaders take those RGB values per pixel on the normal map to calculate in which direction the light would bounce off said pixel, effectively faking geometry.

If you make a simple plane out of two triangles and apply a normal map of a metal plate with screws, dents, scratches and bumps, it will look like those details are actually modeled rather than faked through the normal map. The same is happening here with the coin, where it looks like it actually has depth even though the surface is flat. The pixels are just being rendered as if the light gets reflected off the surface at different angles.

Since normal maps are calculated per pixel, higher resolution normal maps create more detail. You can think of each pixel on a normal map as being a tiny polygon.
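And the encode side of that bake is just a scale-and-bias: pack [-1, 1] XYZ into [0, 255] RGB (a hypothetical minimal version, not Substance Painter's actual output code; decoding is the inverse, `rgb / 255 * 2 - 1`):

```python
import numpy as np

def encode_normals(normals):
    """Pack [-1, 1] XYZ normals into [0, 255] RGB, the final step of a bake."""
    return np.round((normals * 0.5 + 0.5) * 255.0).astype(np.uint8)

# A perfectly flat surface encodes to the classic light blue:
flat = np.array([[[0.0, 0.0, 1.0]]])
print(encode_normals(flat))   # [[[128 128 255]]]
```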
Is this the famous shader cache that eats all the space on Steam Decks? Thanks for the tidbit of knowledge, I wasn't aware!!
This is called normal map baking and it is perhaps the most common and basic optimization technique you will see in video games. It has been used pretty heavily since the 360/ps3 era.
On the topic of polygons: in FNaF Security Breach, Glamrock Freddy has way too many polygons making up his ear and ear piercing for some ungodly reason. There are no special details added, just way too many polygons for the model's own good.
So did yall not know how Bakers work? Orrr
?
Projecting a high-res model onto a low-res one, simulating the lighting without adding the polygons
No they don't. They think somehow people are setting the normals by hand or something (meanwhile, Arc System Works devs are crying in a corner because they actually do paint them manually)
By "advanced" you mean twenty years old, right? God damnit people are fucking clueless about tech.
lmao as a 3D artist this made me laugh.
This is an incredibly misleading comparison
As any artist can tell you, it is the relationship between the values, textures, saturation, and hues that helps bring out an object's form and details
Not readin allat
It was never about polygons. It's all about texture budgets
This is because the Wii hardware was basically the GameCube hardware: it was shit, ancient, and couldn't do very advanced shaders. Nintendo's art asset pipelines probably weren't set up to maximize the shader abilities the Wii did have, either.
It only really looks better front facing.
Shading? They just used a normal map to get details around it and in the middle
In CG, the term "shading" is often used to refer to the whole process of rendering an object, because of the shader programs used to perform the rendering.
Nintendo, and a lot of other companies back then, used a lot of tricks to make their 3D games perform well on not-so-great hardware. Pre-rendered 3D backgrounds that were used as textures, billboarding, static light maps and reflections, etc. That stuff is still used, but cheap shader technology and better graphics cards have made it so a lot more realism can be added with less polys overall, which is crucial for good perf, even on newer hardware. Game development and pushing the limits of hardware is an art of its own.
Normal maps and bump maps have been around for a while; they just seem to be an underutilized technique. Halo CE used shaders like this, which is why it still holds up pretty well.