
    What is a render? Rendering, methods, and programs.

    Visualization, also called rendering, is an extremely important area of computer graphics: the process of producing an image from a model by means of computer programs. Almost everything in this field is fleeting and quickly becomes obsolete, because the technology does not stand still - it develops by leaps and bounds, and outdated versions are immediately replaced by newer, more capable ones. The fundamentals, however, which rest on the principle of ray tracing, remain more or less settled.

    The principle is that rays are sent into the 3D scene; when they hit an object they do not stop, but are reflected and travel on until they are fully absorbed. This method produces a very realistic image, but it naturally takes a lot of time. Using special formulas, the renderer emits a ray, traces its entire path, and then writes the result to a cache file. There is also a global illumination setting that controls how secondary bounces of the ray are gradually included. There are a huge number of such settings, since no single formula is responsible for all parameters at once.
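    As a toy illustration of this bounce-until-absorbed idea, here is a minimal sketch. The absorption factor and bounce limit are invented for the example and are not any real renderer's formula:

```python
def trace_ray(energy, bounces_left, absorb=0.4):
    """Follow one light ray: at each surface hit a fraction of the
    energy is absorbed and the rest is reflected onward, until the
    ray runs out of bounces or is effectively fully absorbed."""
    path = []
    while bounces_left > 0 and energy > 0.01:
        energy *= (1.0 - absorb)   # the surface absorbs a fraction
        path.append(round(energy, 3))
        bounces_left -= 1
    return path

# A ray starting at full energy fades with every bounce:
print(trace_ray(1.0, 5))  # [0.6, 0.36, 0.216, 0.13, 0.078]
```

    Each list entry is the remaining energy after one bounce; a renderer would deposit the absorbed portion into the image at every hit.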

    When starting out, you should of course choose the renderer you like best. The list is long: you can settle on RenderMan from Pixar, but if you want to use it under Maya, install RenderMan for Maya (RenderMan Artist Tools), which was written specifically for it. V-Ray is relatively easy to learn and delivers a good level of visualization quality. You can also use renderers such as Fryrender and Mental Ray, each with its own strengths, or YafaRay, which is completely free. In general, the choice is wide; the main thing is to pick a renderer deliberately rather than simply using whatever ships with your 3D package by default. That way your image will be of higher quality and more realistic.

    After downloading or purchasing the renderer you want, go to its official documentation (manual, help, reference - whatever you prefer to call it) and study the descriptions of all the settings. Video tutorials are easy to find, but the main thing is not to overdo it: experts advise against overwhelming yourself with information. Of course you want to know as much as possible, but it is better to proceed step by step, putting everything in order - you will retain more that way. Most importantly, understand that visualization is a complex process that includes developing high-quality materials, setting up lighting, and configuring the render settings themselves. So before working with the program itself, you need to grasp at least the basics of creating a realistic image. For lighting, you can even ask a photographer for advice, because a 3D image shows the world as a camera sees it, not as a person does. In the end, the work is judged by how skillfully it was done and how closely it matches reality.


    What is rendering and what features does this process have?

    Computer graphics are an important part of almost every sphere and environment a person interacts with.

    All objects of the urban environment, interior designs, and household items existed, at the stage of their design and implementation, as three-dimensional computer models drawn by artists in specialized programs.

    Drawing a model happens in several stages; one of the final ones is rendering. What it is and how it is carried out is described in this material.

    Definition

    Rendering (also simply called a render) is one of the final processes in processing and drawing a three-dimensional computer model.

    Technically, it is the process of computing the final two-dimensional image (or a sequence of two-dimensional images) from the three-dimensional model. Depending on the quality and detail required, the result may be just a few images or a great many.

    Sometimes ready-made three-dimensional elements are also used when "assembling" the scene at this stage.

    This process is quite complex and lengthy. It is based on various calculations performed both by the computer and (to a lesser extent) by the artist himself.

    Important! The programs that implement it are designed for working with three-dimensional graphics, so they require powerful hardware and a significant amount of RAM, and they place a heavy load on the computer.

    Scope of application

    In what areas is this concept applicable and is it necessary to carry out such a process?

    This process is necessary in every area that involves creating three-dimensional models and computer graphics in general - which today is almost every area of life a modern person interacts with.

    Computer-aided design is used in:

    • Design of buildings and structures;
    • Landscape architecture;
    • Urban environment design;
    • Interior design;
    • Product design - almost every manufactured thing was once a computer model;
    • Video games;
    • Film production, etc.

    This process is essentially a finishing step: it is the last, or next-to-last, stage when designing a model.

    Note that rendering is often called not the process of creating a model itself, but its result - a finished three-dimensional computer model.

    Technology

    This procedure can be called one of the most difficult when working with three-dimensional images and objects in computer graphics.

    This stage is accompanied by complex technical calculations performed by the program engine - mathematical data about the scene and object at this stage is translated into the final two-dimensional image.

    That is, color, light and other data about a three-dimensional model are processed pixel-by-pixel in such a way that it can be displayed as a two-dimensional picture on a computer screen.

    That is, through a series of calculations, the system determines exactly how each pixel of each two-dimensional image should be colored so that, as a result, it looks like a three-dimensional model on the user's computer screen.

    Kinds

    Depending on the characteristics of the technology and work, there are two main types of such a process - real-time rendering and preliminary rendering.

    In real time

    This type is widespread, mainly in computer games.

    In a game, the image must be computed and built as quickly as possible - for example, while the user moves around a location.

    And although this does not happen "from scratch" - a lot of preparation is done in advance - it is precisely this requirement that makes such computer games place a very heavy load on the computer hardware.

    If something fails here, the picture may distort, unloaded pixels may appear, or the image may not update fully (or at all) when the user's character acts.

    Such engines work in real time in games because the player's actions and direction of movement cannot be predicted exactly (although the most likely scenarios are worked out in advance).

    For this reason the engine has to produce the image at about 25 frames per second: even at 20 frames per second the user starts to feel discomfort, as the picture begins to stutter and lag.
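    The frame-rate requirement translates directly into a time budget per frame; a small sketch of the arithmetic:

```python
def frame_budget_ms(fps):
    """Time the engine has to finish rendering one frame, in ms."""
    return 1000.0 / fps

# At a comfortable 25 fps the engine gets 40 ms per frame;
# at 20 fps it gets 50 ms, but motion already starts to stutter.
print(frame_budget_ms(25))  # 40.0
print(frame_budget_ms(20))  # 50.0
```

    Everything the engine does for a frame - geometry, lighting, shading - has to fit inside that budget, which is why the optimizations described below matter so much.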

    In all this, the optimization process plays a very important role, that is, the measures that developers take to reduce the load on the engine and increase its performance during the game.

    For this reason, smooth rendering relies above all on pre-baked texture maps and some acceptable simplifications of the graphics.

    Such measures reduce the load on both the engine and the computer hardware, which ultimately makes the game launch more easily and run faster.

    It is the quality of optimization of the render engine that largely determines how stable the game is and how realistic everything that happens looks.

    Preliminary

    This type is used in situations where interactivity is not important.

    For example, this type is widely used in the film industry, when designing any model of limited functionality, for example, intended only to be viewed using a PC.

    That is, this is a more simplified approach, which is also possible, for example, in design - that is, in situations where the user’s actions do not need to be guessed, since they are limited and calculated in advance (and with this in mind, rendering can be performed in advance).

    In this case, the load when viewing the model falls not on the program engine, but on the PC’s central processor. At the same time, the quality and speed of image construction depend on the number of cores, the state of the computer, its performance and the CPU.


    What is Render (Rendering)

    Render (Rendering) is the process of creating a final image or sequence of images from two-dimensional or three-dimensional data. This process occurs using computer programs and is often accompanied by difficult technical calculations that fall on the computing power of the computer or on its individual components.

    The rendering process is present in one way or another in various areas of professional activity, be it the film industry, the video game industry, or video blogging. Often, rendering is the last or penultimate stage of work on a project, after which the work is considered complete or needs only a little post-processing. It is also worth noting that "render" often refers not to the rendering process itself but to its completed result.

    The word "Render"

    The word "render" (rendering) is an Anglicism, usually translated into Russian as "visualization".

    What is 3D Rendering?

    Most often, when we talk about rendering, we mean rendering in 3D graphics. It is worth noting right away that 3D rendering does not actually produce three dimensions of the kind we see in the cinema through special glasses. The "3D" prefix refers instead to the way the render is created: from 3-dimensional objects built in 3D modeling programs. Simply put, in the end we still get a 2D image, or a sequence of them (video), created (rendered) from a 3-dimensional model or scene.

    Rendering is one of the most technically difficult stages in working with 3D graphics. To explain it in simple terms, an analogy with photography helps: for a photograph to appear in all its glory, the photographer must go through certain technical stages, such as developing the film or printing it. 3D artists are burdened with roughly the same kind of technical stages: to create the final image they must configure the render and run the rendering process itself.

    Construction of the image.

    As mentioned earlier, rendering is one of the most difficult technical stages, because during rendering there are complex mathematical calculations performed by the render engine. At this stage, the engine translates the mathematical data about the scene into the final 2D image. The process converts the 3D geometry, textures, and lighting data of the scene into the combined color value information of each pixel in a 2D image. In other words, the engine, based on the data it has, calculates what color each pixel of the image should be colored to obtain a complex, beautiful and complete picture.
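    As an illustration of "calculating what color each pixel should be", here is a minimal sketch of one such calculation - a Lambert diffuse term. The names and values are invented for the example; a real engine combines many more terms (specular, shadows, indirect light):

```python
def shade_pixel(normal, light_dir, base_color):
    """Lambert diffuse term: a pixel's brightness depends on the angle
    between the surface normal and the direction toward the light."""
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(min(255, int(c * n_dot_l)) for c in base_color)

# A surface facing the light keeps its full color...
print(shade_pixel((0, 0, 1), (0, 0, 1), (200, 120, 80)))   # (200, 120, 80)
# ...while one turned away from it renders black.
print(shade_pixel((0, 0, 1), (0, 0, -1), (200, 120, 80)))  # (0, 0, 0)
```

    The engine evaluates something like this (plus many other factors) for every pixel of the output image.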

    Main rendering types:

    Globally, there are two main types of rendering, the main differences of which are the speed with which the image is calculated and finalized, as well as the quality of the image.

    What is Real Time Rendering?

    Real-time rendering is often widely used in game and interactive graphics, where the image must be rendered as quickly as possible and displayed in its final form on the monitor display instantly.

    Since the key factor in this type of rendering is interactivity on the part of the user, the image must be rendered without delay and in almost real time, since it is impossible to accurately predict the behavior of the player and how he will interact with the game or interactive scene. In order for an interactive scene or game to run smoothly without jerks and slowness, the 3D engine has to render the image at a speed of at least 20-25 frames per second. If the rendering speed is below 20 frames, the user will feel discomfort from the scene, observing jerks and slow movements.

    The optimization process plays a big role in achieving smooth rendering in games and interactive scenes. To reach the desired rendering speed, developers use various tricks to reduce the load on the render engine and cut the number of required computations. These include lowering the quality of 3D models and textures, as well as baking some lighting and relief information into pre-computed texture maps. It is also worth noting that the main part of the real-time rendering load falls on specialized graphics hardware (the video card, or GPU), which reduces the load on the central processing unit (CPU) and frees its computing power for other tasks.

    What is Pre-Render?

    Pre-rendering is used when speed is not a priority and there is no need for interactivity. This type of rendering is most often used in the film industry, in working with animation and complex visual effects, as well as where photorealism and very high picture quality are needed.

    Unlike real-time rendering, where the main load falls on the graphics card (GPU), in pre-rendering the load falls on the central processing unit (CPU), and rendering speed depends on the number of cores, multithreading, and processor performance.

    It often happens that the rendering time for one frame takes several hours or even several days. In this case, 3D artists practically do not need to resort to optimization, and they can use the highest quality 3D models, as well as texture maps with very high resolution. As a result, the picture turns out much better and more photo-realistic compared to real-time rendering.

    Rendering programs.

    Today there are a large number of rendering engines on the market, differing in speed, image quality, and ease of use.

    As a rule, render engines are built into large 3D graphics programs and have enormous potential. Among the most popular 3D programs (packages) there is such software as:

    • 3ds Max;
    • Maya;
    • Blender;
    • Cinema 4D, etc.

    Many of these 3D packages come with render engines already included; for example, the Mental Ray engine ships with 3ds Max. In addition, almost any popular render engine can be plugged into most well-known 3D packages. Popular render engines include:

    • V-Ray;
    • Mental Ray;
    • Corona Renderer, etc.

    It is worth noting that although the rendering process involves very complex mathematics, developers of 3D rendering software try in every way to spare 3D artists from working directly with the math underlying the render engine. They provide relatively easy-to-understand parametric render settings, along with libraries of materials and lighting setups.

    Many render engines have found fame in certain areas of working with 3D graphics. For example, “V-ray” is very popular among architectural visualizers, due to the availability of a large number of materials for architectural visualization and, in general, good render quality.

    Visualization methods.

    Most render engines use three main calculation methods. Each of them has both its advantages and disadvantages, but all three methods have the right to be used in certain situations.

    1. Scanline.

    Scanline render is the choice of those who prioritize speed over quality. Due to its speed, this type of rendering is often used in video games and interactive scenes, as well as in viewports of various 3D packages. With a modern video adapter, this type of rendering can produce a stable and smooth image in real time with a frequency of 30 frames per second and higher.

    Work algorithm:

    Instead of rendering "pixel by pixel", a scanline renderer determines the visible surfaces row by row: it first sorts the polygons to be rendered by the highest Y coordinate belonging to each polygon, then computes each image row by intersecting that row with the polygon closest to the camera. As it moves from one row to the next, polygons that are no longer visible are discarded.

    The advantage of this algorithm is that it is not necessary to transfer the coordinates of every vertex from main memory into working memory - only the vertices that fall within the visible rendering zone are transferred.
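    A minimal sketch of the row-by-row idea, with heavily simplified "polygons" described only by their vertical span, depth, and color (a real scanline renderer works with full polygon geometry and edge intersections):

```python
def scanline_rows(polygons, height):
    """Toy scanline pass: polygons are (top_y, bottom_y, depth, color).
    For every image row, keep the closest polygon crossing that row."""
    # Sort by the highest Y coordinate of each polygon, as the algorithm does.
    ordered = sorted(polygons, key=lambda p: p[0])
    rows = []
    for y in range(height):
        covering = [p for p in ordered if p[0] <= y <= p[1]]
        # The polygon nearest the camera (smallest depth) wins the row.
        rows.append(min(covering, key=lambda p: p[2])[3] if covering else None)
    return rows

tris = [(0, 3, 5.0, "red"), (2, 5, 2.0, "blue")]
print(scanline_rows(tris, 6))
# ['red', 'red', 'blue', 'blue', 'blue', 'blue']
```

    Where the two shapes overlap (rows 2-3), the nearer blue one hides the red one, which is exactly the visible-surface determination the algorithm performs.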

    2. Raytrace (ray tracing).

    This type of rendering is created for those who want to get a picture with the highest quality and detailed rendering. Rendering of this particular type is very popular among fans of photorealism, and it is worth noting that it is not without reason. Quite often, with the help of ray trace rendering, we can see stunningly realistic shots of nature and architecture, which not everyone can distinguish from photographs; moreover, the ray trace method is often used when working on graphics in CG trailers or films.

    Unfortunately, for the sake of quality, this rendering algorithm is very slow and cannot yet be used in real-time graphics.

    Work algorithm:

    The idea of the Raytrace algorithm is that for each pixel of the screen, one or more rays are traced from the camera to the nearest three-dimensional object. The ray then travels through a set number of bounces, which may include reflections or refractions depending on the scene's materials. The color of each pixel is computed from the interaction of the light ray with the objects along its traced path.

    Raycasting method.

    The algorithm works on the basis of “throwing” rays as if from the eye of the observer, through each pixel of the screen and finding the nearest object that blocks the path of such a ray. Using the properties of the object, its material and scene lighting, we obtain the desired pixel color.

    The "ray tracing" (raytrace) method is often confused with "ray casting". In fact, raycasting is a simplified form of raytrace in which reflected and refracted rays are not processed further: only the first surface in the ray's path is computed.
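    A minimal raycasting sketch in this spirit - spheres instead of arbitrary geometry, and no secondary rays, which is exactly what separates raycasting from full raytracing. The scene data is invented for the example:

```python
import math

def raycast(origin, direction, spheres):
    """Raycasting only: find the first sphere hit by the ray and return
    its color -- no reflected or refracted secondary rays are traced."""
    nearest, color = float("inf"), None
    for (cx, cy, cz), radius, col in spheres:
        # Ray-sphere intersection via the quadratic formula (unit direction).
        oc = (origin[0] - cx, origin[1] - cy, origin[2] - cz)
        b = 2 * sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4 * c
        if disc >= 0:
            t = (-b - math.sqrt(disc)) / 2   # nearer intersection point
            if 0 < t < nearest:
                nearest, color = t, col
    return color

spheres = [((0, 0, 5), 1.0, "green"), ((0, 0, 9), 1.0, "red")]
print(raycast((0, 0, 0), (0, 0, 1), spheres))  # 'green' -- the closer sphere
```

    A full raytracer would continue from the hit point with reflection and refraction rays; here the color of the first surface is the final answer.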

    3. Radiosity.

    Unlike ray tracing, radiosity works independently of the camera and is object-oriented rather than "pixel by pixel". Its main purpose is to simulate surface color more accurately by taking indirect illumination (diffusely bounced light) into account.

    The advantages of “radiosity” are soft graduated shadows and color reflections on an object coming from neighboring objects with bright colors.

    It is a fairly popular practice to use Radiosity and Raytrace together to achieve the most impressive and photorealistic renders.
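    The radiosity idea can be sketched with the classical patch equation B_i = E_i + rho_i * sum_j(F_ij * B_j), solved here by simple iteration. The two-patch scene and its numbers are invented for the example:

```python
def radiosity(emission, reflectance, form_factors, iterations=50):
    """Jacobi iteration of the radiosity equation
    B_i = E_i + rho_i * sum_j(F_ij * B_j) for a handful of patches."""
    n = len(emission)
    b = list(emission)
    for _ in range(iterations):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return [round(x, 3) for x in b]

# Two facing patches: patch 0 emits light, patch 1 only reflects it back.
E, rho = [1.0, 0.0], [0.5, 0.5]
F = [[0.0, 1.0], [1.0, 0.0]]  # each patch "sees" only the other one
print(radiosity(E, rho, F))   # [1.333, 0.667]
```

    Note that the non-emitting patch ends up lit purely by bounced light, and the emitter itself gets brighter than its own emission - exactly the indirect-illumination effect radiosity is meant to capture.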

    What is Video Rendering?

    Sometimes, the expression “render” is used not only when working with 3D computer graphics, but also when working with video files. The video rendering process begins when the video editor user has finished working on the video file, set all the parameters he needs, audio tracks and visual effects. Basically, all that's left is to combine everything we've done into one video file. This process can be compared to the work of a programmer when he has written the code, after which all that is left is to compile all the code into a working program.

    As with 3D rendering, video rendering happens automatically, without user intervention; all that is required is to set a few parameters before starting.

    The speed of video rendering depends on the length of the video and the required output quality. Most of the computation falls on the central processor, so rendering speed depends primarily on its performance.


    Many people often have questions about improving the visual quality of renderings in 3ds Max and reducing the time spent on them. The main tips that can be given to answer this question relate to optimizing geometry, materials and textures.

    1. Optimizing the geometry of 3D models
    During modeling, keep the polygon count as low as possible: many unnecessary polygons in a model directly increase rendering time.

    Avoid mistakes in model geometry, such as open edges and overlapping polygons. Try to keep the models as clean as possible.

    2. What should the textures be like? The texture size should match the size of the model in the final render. For example, if you downloaded a texture somewhere with a resolution of 3000 x 3000 pixels, and the model you are applying it to is in the background of the scene or has a very small scale, then the renderer will be overloaded with excessive texture resolution.
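    A rough rule of thumb for matching texture resolution to on-screen size (a sketch: the power-of-two rounding is a common convention, and real engines handle this automatically with mip-maps):

```python
import math

def needed_texture_size(on_screen_pixels):
    """A texture rarely needs more texels per side than the pixels the
    model covers on screen; round up to the next power of two."""
    return 2 ** math.ceil(math.log2(max(1, on_screen_pixels)))

# A 3000 x 3000 texture on a background object covering ~180 px
# could be swapped for a 256 x 256 copy with no visible loss:
print(needed_texture_size(180))   # 256
print(needed_texture_size(1000))  # 1024
```

    Downscaling oversized textures this way frees memory and shortens render times without changing what the viewer actually sees.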


    Keep in mind that to enhance realism you should add Bump (relief) and Specular (reflection) maps to your materials, since in reality every object has surface relief and reflectivity. Creating such maps from the original texture is not a problem - a passing knowledge of Adobe Photoshop is enough.

    3. Correct lighting

    An extremely important point. Always try to use physically based lighting systems close to real life, such as the Daylight System, VRay Sun and Sky, and HDRI, and use photometric lights with IES profiles as light sources in interiors. This adds realism to the scene, because the render then uses real-world algorithms for computing light.

    Don't forget about gamma correction of images! With a gamma of 2.2, colors will appear correctly in 3ds Max. However, you can only see them like this if your monitor is properly calibrated.



    4. Scene scale
    To obtain renders of decent quality, the scale of the units of measurement in the scene is of enormous importance. Most often, we work in centimeters. This not only allows you to create more accurate models, but also helps with lighting and reflection calculations.

    5. Visualization settings
    If you work with V-Ray, it is recommended to use Adaptive DMC to anti-alias the edges of the image. However, in scenes with lots of detail and many blurry reflections, Fixed gives the best results - it handles this type of image best. Set the number of subdivisions to at least 4, preferably 6.
    To calculate indirect illumination (Indirect Illumination), use the Irradiance Map + Light Cache combination. This tandem computes the scene's lighting quickly; if you want more detail, enable the Detail Enhancement option in the Irradiance Map settings and activate Pre-Filter in the Light Cache. This reduces noise in the picture.
    Good shadow quality can be achieved by setting the number of subdivisions in the VRay light source settings to 15-25. Additionally, always use a physical VRay camera, which gives you complete control over how the light is presented in the scene.
    And for complete control over white balance, try working in the Kelvin temperature scale. For reference, here is a table of temperatures useful when working in 3ds Max (lower values mean warmer, redder tones; higher values give cooler, bluer tones):
    Kelvin color temperature scale for the most common light sources

    • Burning candle - 1900K
    • Halogen lamps - 3200K
    • Flood lamps and modeling light - 3400K
    • Sunrise - 4000K
    • Fluorescent light (cool white) - 4500K
    • Daylight - 5500K
    • Camera flash - 5500K
    • Studio light - 5500K
    • Light from a computer monitor screen - 5500-6500K
    • Fluorescent lamp - 6500K
    • Open shadow (term from photography) - 8000K
    Correcting pale colors in 3ds Max at gamma 2.2

    When using gamma 2.2 in Autodesk 3ds Max, you immediately notice that material colors in the Material Editor look overly bright and washed out compared to the usual gamma 1.0 presentation. And if you absolutely need to match exact RGB values in the scene - say, a lesson already specifies the color values, or the customer provided samples of objects in given colors - then under gamma 2.2 they will look wrong.

    Correction of RGB colors in gamma 2.2

    To achieve the correct brightness, you need to recompute the RGB values with a simple equation: new_color = 255 * ((old_color / 255) ^ 2.2). In words: to get the new channel value for gamma 2.2, divide the old RGB value by the value of white (255), raise the result to the power of 2.2, then multiply back by 255.

    If math is not your thing, don't despair - 3ds Max will do the math for you with its built-in Numeric Expression Evaluator. The evaluator returns the value of a mathematical expression, and the result can be inserted into any field of the program: object creation parameters, transforms, modifier settings, materials.

    Let's calculate a gamma 2.2 color in practice. Inside the material settings, click the color field to open the Color Selector window. Once you have selected a color, place the cursor in the Red channel field and press Ctrl+N to bring up the Numeric Expression Evaluator. Enter the formula, substituting the old Red channel value. The Result field displays the solution; click the Paste button to insert the new value into the Red channel in place of the old one. Repeat the operation for the Green and Blue channels.

    With corrected RGB values, colors will look correct both in the viewports and in the render.

    Working with colors using the CMYK scheme

    You won't always deal only with RGB. Sometimes print colors come in CMYK and need to be converted to RGB, because 3ds Max only supports RGB. You could, of course, launch Adobe Photoshop and convert the values there, but there is a more convenient way: a custom color selector for 3ds Max called Cool Picker lets you see color values in all the common color schemes directly in Max. Download the Cool Picker plugin for your version of 3ds Max. It installs very simply: place the .dlu file in the 3ds Max\plugins folder. Activate it via Customize > Preferences > General tab > Color Selector: Cool Picker, and it will replace the standard color selector.
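    The same equation, new_color = 255 * ((old_color / 255) ^ 2.2), can be computed outside 3ds Max as well; a minimal sketch:

```python
def to_gamma_2_2(old_color):
    """Recompute one RGB channel (0-255) with the article's formula
    new_color = 255 * ((old_color / 255) ** 2.2), so the color keeps
    its intended brightness when the scene is set to gamma 2.2."""
    return round(255 * (old_color / 255) ** 2.2)

# Midtones darken noticeably; pure black and pure white stay fixed.
print([to_gamma_2_2(c) for c in (0, 128, 255)])
```

    Run once per channel, this does exactly what the Numeric Expression Evaluator steps above do by hand.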


    Using gamma 2.2 in 3ds max + V-Ray in practice

    After the theoretical part on setting gamma in V-Ray and 3ds max, we move directly to practice.

    Many 3ds max users, especially those who are faced with interior visualization, notice that when setting physically correct lighting, certain places in the scene are still darkened, although in fact everything should be well lit. This is especially noticeable in the corners of geometry and on the shadow side of objects.

    Everyone tried to solve this problem in different ways. Beginning 3ds max users first tried to correct this by simply increasing the brightness of the light sources.

    This approach brings certain results, the overall illumination of the scene increases. However, it also leads to unwanted overexposure caused by these light sources. This does not change the situation with an unrealistic image for the better. One problem with darkness (in places difficult to reach light) is replaced by another problem with overexposure (near light sources).

    Some people have come up with more complex ways to "solve" the problem by adding additional lights to the scene, making them invisible to the camera to simply illuminate dark areas. At the same time, there is no longer any need to talk about any realism and physical accuracy of the image. In parallel with the illumination of dark places, shadows disappeared, and it seemed as if the objects in the scene were floating in the air.

    All of the above methods of dealing with implausible darkness are too straightforward and obvious, but ineffective.

    The essence of the problem with dark renderings is that the gamma values ​​of the image and the monitor are different.

    What is gamma?
    Gamma is the degree of non-linearity in the transition of color from dark to bright values. From a mathematical point of view, the value of linear gamma is 1.0, which is why programs such as 3ds max, V-Ray, etc. perform calculations in gamma 1.0 by default. But a gamma value of 1.0 is only compatible with an “ideal” monitor, which has a linear dependence of the color transition from white to black. But since such monitors do not exist in nature, the actual gamma is nonlinear.

    The gamma value for the NTSC video standard is 2.2. For computer displays, the gamma value is typically between 1.5 and 2.0. But for convenience, the nonlinearity of the color transition on all screens is considered equal to 2.2.

    When a monitor with a 2.2 gamma displays an image whose gamma is 1.0, we see dark colors in the 1.0 gamma instead of the expected bright colors in the 2.2 gamma. Therefore, colors in the middle range (Zone 2) become darker when viewing a 1.0 gamma image on a 2.2 gamma output device. However, in the dark range (Zone 1), the 1.0 and 2.2 gamma representations are very similar, allowing shadows and blacks to be rendered correctly.
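    This midtone darkening is easy to quantify with a simple power function (a sketch; the exact zone boundaries are illustrative):

```python
def displayed_brightness(stored, monitor_gamma=2.2):
    """Fraction of full brightness a gamma-2.2 monitor actually shows
    for a stored linear (gamma 1.0) value in the range 0..1."""
    return round(stored ** monitor_gamma, 3)

# A linear midtone of 0.5 displays far darker on a 2.2 monitor...
print(displayed_brightness(0.5))   # 0.218
# ...while deep shadows and bright highlights are almost unaffected.
print(displayed_brightness(0.05))  # 0.001
print(displayed_brightness(0.95))  # 0.893
```

    The numbers match the zone description: the middle range collapses toward black, while the extremes of the scale survive nearly intact.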

    In areas with light tones (Zone 3) there are also many similarities. Consequently, a bright image with a gamma of 1.0 will also be displayed quite correctly on a monitor with a gamma of 2.2.

    So, to get proper gamma 2.2 output, the gamma of the original image must be changed. Of course, this can be done in Photoshop by adjusting the gamma there, but that method can hardly be called convenient: you would have to change the image settings every time, save them to disk, and edit them in a raster editor. And it has an even more significant drawback.

    Modern renderers such as V-Ray calculate the image adaptively, so the accuracy of the calculation depends on many factors, including the brightness of the light in a given area. In shadowed places V-Ray computes illumination less accurately, and those areas become noisy; in bright, clearly visible areas the calculations are carried out with greater accuracy and a minimum of artifacts. This speeds up rendering by saving time on inconspicuous parts of the image. Raising the gamma of the output image in Photoshop brightens exactly those parts that V-Ray considered less significant and computed at lower quality. All the unwanted artifacts become visible, and the picture, though brighter than before, looks simply terrible. In addition, the range of the textures will also shift: they will look faded and colorless.

    The only correct way out of this situation is to change the gamma value in which the V-Ray renderer works. This way you will get acceptable brightness in the midtones, where there will be no such obvious artifacts.

    This lesson will show how gamma is adjusted in the V-Ray renderer and 3ds Max.

    To change the gamma that V-Ray works with, find the V-Ray: Color Mapping rollout on the V-Ray tab of the Render Scene window (F10) and set the Gamma value to 2.2.

    A peculiarity of V-Ray is that gamma correction of the displayed colors works only in the V-Ray Frame Buffer, so if you want to see the results of your gamma manipulations, you must enable V-Ray: Frame Buffer on the V-Ray tab.

    After this, rendering will take place in the gamma 2.2 we need, with normally illuminated midtones. One drawback remains: the textures used in the scene will appear lighter, discolored and faded.

    Almost all the textures we use look fine on the monitor because they have already been adjusted for it and initially have a gamma of 2.2. For the V-Ray renderer to produce gamma 2.2 output without multiplying the texture gamma by the renderer gamma (2.2 × 2.2), the textures must enter the renderer at gamma 1.0. Then, after correction by the renderer, their gamma will become 2.2.
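    The double-correction problem can be illustrated in a few lines of Python (a sketch; the 0.5 input value is arbitrary):

```python
def gamma_encode(v, gamma=2.2):
    """Encode a linear [0, 1] value for a display with the given gamma."""
    return v ** (1.0 / gamma)

linear = 0.5
once = gamma_encode(linear)    # a texture already stored at gamma 2.2
twice = gamma_encode(once)     # the renderer corrects it again: 2.2 * 2.2 in effect
print(f"encoded once: {once:.3f}, encoded twice: {twice:.3f}")
```

Encoding twice pushes 0.5 up to roughly 0.87 instead of 0.73, which is exactly the faded, washed-out look described above.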

    You could darken all the textures by changing their gamma from 2.2 to 1.0 in Photoshop, counting on the renderer to lighten them again. However, this method would be very tedious: firstly, it would take time and patience to bring every texture in the scene to gamma 1.0, and secondly, it would make it impossible to view the textures in normal gamma, because they would be darkened all the time.

    To avoid this, we will simply correct the textures at the 3ds Max input. Luckily, 3ds Max provides the necessary gamma settings, available from the main menu:

    Customize > Preferences... > Gamma and LUT

    The main gamma settings of 3ds Max are located on the Gamma and LUT tab. Specifically, we need the input texture correction setting called Input Gamma. Do not be misled by the fact that its default value is 1.0: this is not a correction amount but the gamma assumed for incoming textures. By default all textures are assumed to be in gamma 1.0, but in fact, as mentioned earlier, they are in gamma 2.2. That means we must specify a value of 2.2 instead of 1.0.

    Don't forget to check the box Enable Gamma/LUT Correction to access gamma settings.

    Images taken in the correct gamma look much better and more accurate than those obtained using the settings described at the beginning of the article. They have correct halftones, there are no bright overexposures near light sources, and there are no artifacts in unlit areas of the image. This way the textures will also be rich and vibrant.

    That seems to be everything, but at the end of the lesson I would like to mention one more aspect of working with gamma. Since the V-Ray renderer now works in an unusual gamma, you should also set the 3ds Max display gamma to 2.2 so that colors in the Material Editor and Color Selector are displayed correctly. Otherwise it is easy to get confused when materials are set up in gamma 1.0 but are actually converted to gamma 2.2 inside the program.

    To set up the correct display of materials in the 3ds Max material editor, use the settings on the same Gamma and LUT tab: set the gamma value in the Display section to 2.2 and check Affect Color Selectors and Affect Material Editor in the Materials and Colors section.

    Gamma 2.2 has already become the standard when working with 3ds max and V-Ray. I hope that this material will help you in your work!

    Rendering

    Four groups of methods have been developed that are more efficient than simulating every light ray illuminating the scene:

    • Rasterization, together with scanline rendering. Rendering is done by projecting scene objects onto the screen without calculating a perspective effect relative to the observer.
    • Ray casting. The scene is considered as observed from a certain point, from which rays are directed at the objects in the scene; these rays determine the color of each pixel on the two-dimensional screen. Unlike ray tracing, a ray stops propagating as soon as it reaches any object in the scene or its background. Some very simple optical effects can be added. The perspective effect arises naturally, because the rays are cast at an angle that depends on the pixel's position on the screen and the camera's maximum viewing angle.
    • Ray tracing is similar to ray casting. Rays are directed from the observation point at objects in the scene to determine the color of a pixel, but when a ray hits an object it does not stop; instead it is split into three component rays, each contributing to the pixel's color: reflected, shadow, and refracted. The number of such splits determines the tracing depth and affects the quality and photorealism of the image. Thanks to these properties the method produces very photorealistic images, but it is very resource-intensive, and visualization takes a significant amount of time.
    • Path tracing follows a similar principle of tracing ray propagation, but of the four methods it is the closest to the physical laws of light propagation. It is also the most resource-intensive.
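    As an illustration of the ray casting method, here is a minimal self-contained sketch (the single-sphere scene and the tiny 9×9 "screen" are invented for the example): one ray is cast per pixel, and propagation stops at the first intersection test.

```python
import math

def hits_sphere(origin, direction, center, radius):
    """Ray-sphere intersection test (geometric form, unit-length direction)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    return b * b - 4.0 * c >= 0.0    # discriminant of t^2 + b*t + c = 0

# Cast one ray per pixel of a tiny 9x9 screen toward a sphere at z = 3.
width = height = 9
rows = []
for j in range(height):
    row = ""
    for i in range(width):
        # Map the pixel to a point on an image plane at z = 1, then normalize:
        # the cast angle depends on the pixel position, giving perspective.
        x = (i - width // 2) / (width // 2)
        y = (j - height // 2) / (height // 2)
        n = math.sqrt(x * x + y * y + 1.0)
        ray = (x / n, y / n, 1.0 / n)
        row += "#" if hits_sphere((0, 0, 0), ray, (0, 0, 3), 1.2) else "."
    rows.append(row)
print("\n".join(rows))
```

The script prints a small sphere silhouette in the middle of the "screen". Extending each hit with reflected, shadow, and refracted rays would turn this sketch into the ray tracing method described above.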

    Advanced software usually combines several techniques to produce high-quality and photorealistic images at an acceptable cost of computing resources.

    Mathematical justification

    The implementation of the rendering engine is always based on a physical model. The calculations performed relate to one or another physical or abstract model. The basic ideas are easy to understand but difficult to apply. Typically, the final elegant solution or algorithm is more complex and contains a combination of different techniques.

    Basic equation

    The key to the theoretical basis of rendering models is the rendering equation. It is the most complete formal description of the part of rendering that is not related to the perception of the final image. All models represent some kind of approximate solution to this equation.

    The informal interpretation is as follows: the amount of light radiation (Lo) leaving a point in a given direction is the sum of its own emitted radiation and the reflected radiation. The reflected radiation is the sum over all incoming directions of the incoming radiation (Li), multiplied by the reflection coefficient for the given angle. By relating the incoming light to the outgoing light at a single point, the equation describes the entire luminous flux in the system.
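    This verbal description corresponds to the standard form of the rendering equation (Kajiya, 1986, cited in the chronology below):

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

    Here \(L_e\) is the point's own emission, \(f_r\) is the reflection coefficient (the BRDF) for the pair of directions, and \((\omega_i \cdot n)\) accounts for the angle between the incoming direction and the surface normal.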

    Rendering software - renderers (visualizers)

    • 3Delight
    • AQSIS
    • BMRT (Blue Moon Rendering Tools) (discontinued)
    • BusyRay
    • Entropy (discontinued)
    • Fryrender
    • Gelato (development discontinued after NVIDIA acquired mental images, the maker of mental ray)
    • Holomatix Renditio (interactive ray tracer)
    • Hypershot
    • Keyshot
    • Mantra renderer
    • Meridian
    • Pixie
    • RenderDotC
    • RenderMan (PhotoRealistic RenderMan, Pixar’s RenderMan or PRMan)
    • Octane Render
    • Arion Renderer

    Renderers that work in real (or near real) time.

    • VrayRT
    • Shaderlight
    • Showcase
    • Rendition
    • Brazil IR
    • Artlantis Render

    3D modeling packages with their own renderers
    • Autodesk 3ds Max (Scanline)
    • e-on Software Vue
    • SideFX Houdini
    • Terragen, Terragen 2

    Render properties comparison table

    RenderMan mental ray Gelato (discontinued) V-Ray finalRender Brazil R/S Turtle Maxwell Render Fryrender Indigo Renderer LuxRender Kerkythea YafaRay
    compatible with 3ds Max Yes, via MaxMan built in Yes Yes Yes Yes No Yes Yes Yes Yes Yes No
    Maya compatible Yes, via RenderMan Artist Tools built in Yes Yes Yes No Yes Yes Yes Yes Yes No
    Softimage compatible Yes, via XSIMan built in No Yes No No No Yes Yes Yes Yes No
    Houdini compatible Yes Yes No No No No No No Yes Yes No No
    LightWave compatible No No No No No No No Yes Yes No No No
    Blender compatible No No No No No No No No No Yes Yes Yes Yes
    compatible with SketchUp No No No Yes No No No Yes Yes Yes No Yes No
    Cinema 4D compatible Yes (starting from version 11) Yes No Yes Yes No No Yes Yes Yes Yes No, frozen No
    platform Microsoft Windows, Linux, Mac OS X Microsoft Windows, Linux, Mac OS X
    biased, unbiased (without assumptions) biased biased biased biased biased biased biased unbiased unbiased unbiased unbiased
    scanline Yes Yes Yes No No No No No No No No
    raytrace very slow Yes Yes Yes Yes Yes Yes No No No No Yes
    Global Illumination algorithms Photon, Final Gather (Quasi-Montecarlo) Light Cache, Photon Map, Irradiance Map, Brute Force (Quasi-Montecarlo) Hyper Global Illumination, Adaptive Quasi-Montecarlo, Image, Quasi Monte-Carlo Quasi-Montecarlo, PhotonMapping Photon Map, Final Gather Metropolis Light Transport Metropolis Light Transport Metropolis Light Transport Metropolis Light Transport, Bidirectional Path Tracing
    Camera - Depth of Field (DOF) Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes
    Camera - Motion Blur (vector pass) very fast Yes fast Yes Yes Yes Yes Yes Yes Yes Yes Yes
    Displacement fast Yes fast slow, 2d and 3d slow No fast Yes Yes Yes Yes
    Area Light Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes
    Glossy Reflect/Refract Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes
    SubSurface Scattering (SSS) Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes No Yes
    Standalone Yes Yes Yes 2005 (raw) No No No Yes Yes Yes
    Current version 13.5.2.2 3.7 2.2 2.02a Stage-2 2 4.01 1.61 1.91 1.0.9 v1.0-RC4 Kerkythea 2008 Echo 0.1.1 (0.1.2 Beta 5a)
    year of issue 2000 (?) (?) 2006 2011 2008
    materials library No 33 My mental Ray No 2300+ vray-materials 30 of. website 113 of. website No 3200+ of. website 110 of. website 80 of. website 61 of. website
    based on technology liquidlight Metropolis Light Transport
    normal mapping
    IBL/HDRI Lighting Yes
    Physical sky/sun Yes Yes
    official site MaxwellRender.com fryrender.com IndigoRenderer.com LuxRender.net kerkythea.net YafaRay.org
    manufacturer country USA Germany USA Bulgaria Germany USA Sweden Spain Spain
    cost $ 3500 195 free 1135 (Super Bundle) 999 (Bundle) 899 (Standard) 240 (Educational) 1000 735 1500 995 1200 295€ free, GNU free free, LGPL 2.1
    main advantage Baking high speed (not very high quality) free free free
    manufacturer company Pixar mental images (since 2008 NVIDIA) NVIDIA Chaos Group Cebas SplutterFish Illuminate Labs Next Limit Feversoft

    see also

    • Algorithms using z-buffer and Z-buffering
    • Painter's algorithm
    • Scanline algorithms such as Reyes
    • Global Illumination Algorithms
    • Radiosity
    • Text as image

    Chronology of the most important publications

    • 1968 Ray casting(Appel, A. (1968). Some techniques for shading machine renderings of solids. Proceedings of the Spring Joint Computer Conference 32 , 37-49.)
    • 1970 Scan-line algorithm(Bouknight, W. J. (1970). A procedure for generation of three-dimensional half-tone computer graphics presentations. Communications of the ACM)
    • 1971 Gouraud shading(Gouraud, H. (1971). Computer display of curved surfaces. IEEE Transactions on Computers 20 (6), 623-629.)
    • 1974 Texture mapping(Catmull, E. (1974). A subdivision algorithm for computer display of curved surfaces. PhD thesis, University of Utah.)
    • 1974 Z-buffer(Catmull, E. (1974). A subdivision algorithm for computer display of curved surfaces. PhD thesis)
    • 1975 Phong shading(Phong, B-T. (1975). Illumination for computer generated pictures. Communications of the ACM 18 (6), 311-316.)
    • 1976 Environment mapping(Blinn, J.F., Newell, M.E. (1976). Texture and reflection in computer generated images. Communications of the ACM 19 , 542-546.)
    • 1977 Shadow volumes(Crow, F.C. (1977). Shadow algorithms for computer graphics. Computer Graphics (Proceedings of SIGGRAPH 1977) 11 (2), 242-248.)
    • 1978 Shadow buffer(Williams, L. (1978). Casting curved shadows on curved surfaces. Computer Graphics (Proceedings of SIGGRAPH 1978) 12 (3), 270-274.)
    • 1978 Bump mapping(Blinn, J.F. (1978). Simulation of wrinkled surfaces. Computer Graphics (Proceedings of SIGGRAPH 1978) 12 (3), 286-292.)
    • 1980 BSP trees(Fuchs, H. Kedem, Z.M. Naylor, B.F. (1980). On visible surface generation by a priori tree structures. Computer Graphics (Proceedings of SIGGRAPH 1980) 14 (3), 124-133.)
    • 1980 Ray tracing(Whitted, T. (1980). An improved illumination model for shaded display. Communications of the ACM 23 (6), 343-349.)
    • 1981 Cook shader(Cook, R.L. Torrance, K.E. (1981). A reflectance model for computer graphics. Computer Graphics (Proceedings of SIGGRAPH 1981) 15 (3), 307-316.)
    • 1983 Mipmaps(Williams, L. (1983). Pyramidal parametrics. Computer Graphics (Proceedings of SIGGRAPH 1983) 17 (3), 1-11.)
    • 1984 Octree ray tracing(Glassner, A.S. (1984). Space subdivision for fast ray tracing. IEEE Computer Graphics & Applications 4 (10), 15-22.)
    • 1984 Alpha compositing(Porter, T. Duff, T. (1984). Compositing digital images. Computer Graphics (Proceedings of SIGGRAPH 1984) 18 (3), 253-259.)
    • 1984 Distributed ray tracing(Cook, R.L. Porter, T. Carpenter, L. (1984). Distributed ray tracing. Computer Graphics (Proceedings of SIGGRAPH 1984) 18 (3), 137-145.)
    • 1984 Radiosity(Goral, C. Torrance, K.E. Greenberg, D.P. Battaile, B. (1984). Modeling the interaction of light between diffuse surfaces. Computer Graphics (Proceedings of SIGGRAPH 1984) 18 (3), 213-222.)
    • 1985 Hemi-cube radiosity(Cohen, M.F. Greenberg, D.P. (1985). The hemi-cube: a radiosity solution for complex environments. Computer Graphics (Proceedings of SIGGRAPH 1985) 19 (3), 31-40.)
    • 1986 Light source tracing(Arvo, J. (1986). Backward ray tracing. SIGGRAPH 1986 Developments in Ray Tracing course notes)
    • 1986 Rendering equation(Kajiya, J.T. (1986). The rendering equation. Computer Graphics (Proceedings of SIGGRAPH 1986) 20 (4), 143-150.)
    • 1987 Reyes algorithm(Cook, R.L. Carpenter, L. Catmull, E. (1987). The reyes image rendering architecture. Computer Graphics (Proceedings of SIGGRAPH 1987) 21 (4), 95-102.)
    • 1991 Hierarchical radiosity(Hanrahan, P. Salzman, D. Aupperle, L. (1991). A rapid hierarchical radiosity algorithm. Computer Graphics (Proceedings of SIGGRAPH 1991) 25 (4), 197-206.)
    • 1993 Tone mapping(Tumblin, J. Rushmeier, H. E. (1993). Tone reproduction for realistic computer generated images. IEEE Computer Graphics & Applications 13 (6), 42-48.)
    • 1993 Subsurface scattering(Hanrahan, P. Krueger, W. (1993). Reflection from layered surfaces due to subsurface scattering. Computer Graphics (Proceedings of SIGGRAPH 1993) 27, 165-174.)
    • 1995 Photon mapping(Jensen, H.J. Christensen, N.J. (1995). Photon maps in bidirectional Monte Carlo ray tracing of complex objects. Computers & Graphics 19 (2), 215-224.)
    • 1997 Metropolis light transport(Veach, E. Guibas, L. (1997). Metropolis light transport. Computer Graphics (Proceedings of SIGGRAPH 1997), 65-76.)