How do I write directly to video memory or to a HWND window's pixel memory?


Hello,

I am making a UI from scratch for Windows, as low level as possible, and I have hit a roadblock. SetPixel isn't an option because it's too slow. So: how do I write directly to video memory or to a HWND window's pixel memory?


@fleabay It's not the same question.

What do you mean by “as low level as possible for windows”?

You don't write directly to video memory. You're about 30 years too late to the industry for that.

The simplest (but slow) approach would probably be what you described: get the window's device context and set each pixel one at a time. It is slow because you're making millions of trips out to relatively slow drawing calls. You might also select a pen object and use the MoveToEx() and LineTo() commands.
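A minimal sketch of that per-pixel route, assuming `hwnd` is a valid window handle (the function name here is just for illustration):

```c
#include <windows.h>

/* Sketch of the slow per-pixel GDI approach. Every SetPixel() is a
   full round trip through GDI, which is why this crawls. */
void DrawSlow(HWND hwnd)
{
    HDC hdc = GetDC(hwnd);               /* the window's device context */
    int x, y;

    for (y = 0; y < 256; ++y)            /* one GDI call per pixel */
        for (x = 0; x < 256; ++x)
            SetPixel(hdc, x, y, RGB(x, y, 0));

    /* Pen-based drawing batches a bit better than per-pixel calls. */
    HPEN pen = CreatePen(PS_SOLID, 1, RGB(255, 0, 0));
    HPEN old = (HPEN)SelectObject(hdc, pen);
    MoveToEx(hdc, 0, 0, NULL);
    LineTo(hdc, 255, 255);
    SelectObject(hdc, old);
    DeleteObject(pen);

    ReleaseDC(hwnd, hdc);
}
```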

Another low-level and somewhat faster approach is to draw into a GDI bitmap object, then blit the object to the screen.

These are “low level” in the sense that they are GDI primitives, not low level as in close to the hardware.
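To make the second approach concrete: the closest Win32 gets to handing you a window's pixel memory is a DIB section. CreateDIBSection() gives you a pointer to an ordinary memory buffer that you fill with the CPU and that GDI can blit in one call. A hedged sketch, assuming a fixed 640x480, 32-bit frame (names are illustrative):

```c
#include <windows.h>
#include <stdint.h>

/* Draw into a DIB section's raw pixel buffer, then blit the whole
   frame to the window in a single call. */
void BlitFrame(HWND hwnd)
{
    const int W = 640, H = 480;
    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth       = W;
    bmi.bmiHeader.biHeight      = -H;     /* negative height = top-down rows */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;     /* 0x00RRGGBB per pixel */
    bmi.bmiHeader.biCompression = BI_RGB;

    void *bits  = NULL;
    HDC hdc     = GetDC(hwnd);
    HDC memdc   = CreateCompatibleDC(hdc);
    HBITMAP dib = CreateDIBSection(memdc, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
    HBITMAP old = (HBITMAP)SelectObject(memdc, dib);

    /* Write pixels straight into memory: no API call per pixel. */
    GdiFlush();                           /* flush pending GDI ops first */
    uint32_t *px = (uint32_t *)bits;
    for (int i = 0; i < W * H; ++i)
        px[i] = 0x000000FF | ((i & 0xFF) << 16);  /* blue, with a red ramp */

    /* One call moves the whole frame onto the window. */
    BitBlt(hdc, 0, 0, W, H, memdc, 0, 0, SRCCOPY);

    SelectObject(memdc, old);
    DeleteObject(dib);
    DeleteDC(memdc);
    ReleaseDC(hwnd, hdc);
}
```

For an animated UI you would keep the DIB section around and redraw plus re-blit each frame, rather than recreating it every time.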

Ever since hardware acceleration of graphics arrived in the early 1990s, it has been the graphics card, not the CPU, that controls the display. Starting around 1992 with 2D, and then with 3D, everything moved to the card. The fastest routes involve transferring what you want to draw to the graphics card early and in bulk, then telling the card to render it.

An hBitmap with SetPixelV is extremely slow. And because of the industry, I am now forced to learn an overly complicated API that I will hate just to display a f…. bitmap, because everything is inaccessible.

foxo said:
And because of the industry, I am now forced to learn an overly complicated API that I will hate, because everything is inaccessible

How difficult for you.

If you truly will hate it, I recommend you find something else in life that brings you joy.

foxo said:
An hBitmap with SetPixelV is extremely slow. And because of the industry, I am now forced to learn an overly complicated API

Between this topic and your last one, it seems like the technical issue is that you are trying to force the hardware to behave in a way that it doesn't like.

That's in addition to the emotional issue. Seriously, get the emotional side in check. If it isn't bringing you joy, you're probably best off walking away, at least for a little while.

As for the technical aspect: computer programmers have always needed to know what's going on with the hardware; they need to understand the computers they are programming. If you don't understand the computers you are programming, then programming them will be difficult.

In the case of computer graphics, the model was digital signals through the '70s; then it shifted to an array in memory in the early '80s; then to a window of memory mapped to the graphics card through the late '80s and early '90s; then, in the early '90s, to passing 2D drawing commands directly to the hardware; and in the late '90s and early 2000s everything switched over to the hardware. For the past two decades it has been about reducing bus calls over to the hardware and getting out of the hardware's way.

Back in the days of CGA and EGA, when I started, you worked with bit planes and with mapping the hardware and the planes together in memory. When VGA graphics came along in '87, with more complex mappings to video card memory, it was a huge shift for many people. The system also provided an abstraction, something that looked like a flat array of pixels ("chained" mode), which was a huge abstraction at the time, and relatively slow. You'd never achieve great game performance using the abstraction, but it was approachable. The VESA abstractions and the VESA Local Bus pushed even more out to the graphics card. So by about the time the 486 came out, game developers were looking for the 486 DX rather than the 486 SX (that is, they wanted the CPU that included the floating-point unit), and for VESA graphics cards where the chip did the actual drawing work rather than the CPU.
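For the curious, direct video memory writing from that era looked roughly like this. This is real-mode DOS code for VGA mode 13h: it needs a 16-bit compiler such as Turbo C, and it will not build or run on a modern system.

```c
#include <dos.h>
#include <conio.h>

int main(void)
{
    /* Video memory really was just a pointer: segment A000. */
    unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);
    union REGS r;
    unsigned int i;

    r.x.ax = 0x0013;                 /* INT 10h: 320x200, 256-color mode */
    int86(0x10, &r, &r);

    /* The whole screen is a flat 64000-byte array, one byte per pixel. */
    for (i = 0; i < 64000u; ++i)
        vga[i] = (unsigned char)(i & 0xFF);

    getch();                         /* wait for a key */
    r.x.ax = 0x0003;                 /* restore 80x25 text mode */
    int86(0x10, &r, &r);
    return 0;
}
```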

When you approach it as a pixel array, Windows will quietly hide the details from you: it allows a programmer to treat the display as though it were a memory buffer and does all the work behind your back. That is incredibly slow, but it is emulated for historical reasons and for beginners.

If you are looking for high performance and want to program the way the hardware wants to be programmed, then you need to stop thinking of the display as a buffer of memory (the way it existed over 30 years ago) and start thinking of it as a high-performance card to which you pass a sequence of instructions as a list; the shorter the list, the faster it runs. Drawing a million pixels one at a time is a list of a million commands. Drawing an object already sitting in memory is a single command.
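To put rough numbers on that, here is a hedged micro-benchmark sketch: it pushes the same million pixels once as a million SetPixelV() calls and once as a single StretchDIBits() call. The function name and the `px` buffer are hypothetical; see the DIB sketch above for how such a buffer is filled.

```c
#include <windows.h>
#include <stdio.h>

/* Time one GDI call per pixel against one call for the whole frame.
   `px` is assumed to point at a 1000x1000 array of 32-bit pixels.
   (COLORREF and DIB channel orders differ; ignored here since we
   only care about timing, not exact colors.) */
void ComparePaths(HWND hwnd, const DWORD *px)
{
    const int W = 1000, H = 1000;
    HDC hdc = GetDC(hwnd);
    LARGE_INTEGER f, t0, t1, t2;
    QueryPerformanceFrequency(&f);

    QueryPerformanceCounter(&t0);
    for (int y = 0; y < H; ++y)               /* a million commands */
        for (int x = 0; x < W; ++x)
            SetPixelV(hdc, x, y, (COLORREF)px[y * W + x]);
    QueryPerformanceCounter(&t1);

    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth       = W;
    bmi.bmiHeader.biHeight      = -H;         /* top-down rows */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;
    StretchDIBits(hdc, 0, 0, W, H, 0, 0, W, H,   /* one command */
                  px, &bmi, DIB_RGB_COLORS, SRCCOPY);
    QueryPerformanceCounter(&t2);

    printf("per-pixel: %.3f s   one blit: %.3f s\n",
           (double)(t1.QuadPart - t0.QuadPart) / (double)f.QuadPart,
           (double)(t2.QuadPart - t1.QuadPart) / (double)f.QuadPart);
    ReleaseDC(hwnd, hdc);
}
```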

@frob Are there assembly instructions to call the GPU? I am willing to go that route if I must, to have something I understand and fully master without surprises, instead of premade monolithic APIs.

To use a car analogy:

Instead of wanting to drive by using the steering wheel and gas pedal, you're asking for instruction on how to inject fuel directly into the piston chamber and how to steer by pulling on the axle.

Yes, those instructions exist. You can send data over the hardware bus. No, it is the wrong approach, unless your goal is to write hardware drivers. The instructions are “in” and “out”.
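To be concrete about what those look like (and why they won't help here): MSVC exposes them as the __inbyte/__outbyte intrinsics, but they are privileged instructions. On modern Windows they only execute inside a kernel-mode driver; calling them from a normal program raises an exception. A sketch, using the old VGA palette ports purely as a historical example:

```c
#include <intrin.h>

/* Privileged port I/O. In user mode on modern Windows this faults;
   it is only meaningful in ring 0, i.e. inside a kernel driver. */
void PokePalette(void)
{
    __outbyte(0x3C8, 0);    /* VGA DAC: select palette index 0 */
    __outbyte(0x3C9, 63);   /* red component (6-bit) */
    __outbyte(0x3C9, 0);    /* green */
    __outbyte(0x3C9, 0);    /* blue */
}
```

And even in a driver, a modern GPU is not programmed through these ports; they date from the VGA register interface.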

I will write a driver, since it's the only way. I don't see any other approach.

Good luck on your journey.

