GUI design guidelines
Architecture of a C++ Cross-Platform 2D GUI.
In this document and the related EGE2D C++ library, we discuss generic GUI design. To proceed in depth in a real context and avoid being too theoretical, we take as a concrete field of application the automation sector, specifically embedded systems, IoT, SCADA, HMI, and also the Raspberry Pi or equivalent boards.
In the near future we will also extend the library toward gaming, if people are interested in it, or you can do it yourself by extending the project with this body of knowledge.
EGE2D: design principles and goals.
- Fast, compact, optimized.
- Easy and user friendly (for those who want to use it without going deep into development details).
- Well crafted: we implement OOP and design patterns as best we can.
- Simple in its concepts and easy to extend, following the open/closed principle.
- Short learning curve.
- Based on OpenGL ES 2, but Metal and DirectX can be integrated by adding the relevant methods (future feature).
- As connected with the rest of the world as possible.
Source code: You can DOWNLOAD the source code from GitHub by following the links on this page and looking for the download section. On the same page there are also some video tutorials on YouTube that explain how to download, prepare the system, and compile the library and the samples for the different target platforms.
- Makes extensive use of the Standard Template Library (STL) and of widely used third-party libraries such as Boost.
- Uses GLFW3 for window handling and I/O.
- Uses FreeType for text rendering.
- Uses libjpeg for JPEG image handling.
Purpose of EGE2D as a GUI for automation and embedded systems.
The EGE2D GUI is designed to be a portable, multi-purpose graphic interface, aimed primarily at remote control of systems from a local graphic context.
This document contains the concepts that were considered while the software was designed and written. It helps in understanding the software architecture itself, and it can also be a solid basis on which students or programmers can develop their own GUI or extend EGE2D to suit their specific needs.
In summary, this document discusses the design of a graphics library. The project ships with tutorials that help in understanding how to build it and get it up and ready to use.
We specialize in software development. You can follow this link if you are interested in software development support, including for commercial projects.
Design for a GUI: the concepts behind it
The GUI as a sequence of pictures.
The main idea behind the GUI is to think of it as a dynamic picture that can be changed or modified each time it is presented to the user on the display. Modern graphics cards can show pictures at a high frame rate, and in fact the GUI library applies all modifications to the picture between one frame and the next, giving the person watching the screen the feeling that the system reacts immediately to events.
The human eye needs at least 24 frames per second to perceive fluid motion, as happens in movies. The principle here is the same.
Implementation and description.
The container for the pictures: handy GLFW as the window handler.
The picture cannot live on its own; it needs a support to be displayed on. The appropriate support for a GUI, on which all the content can be displayed, is called a Graphic Context (GC) and is created through operating-system-specific APIs. To avoid dealing with the details of window creation and deletion on different systems and devices, we adopt GLFW3, which helps us create our context in several convenient ways. Beyond that, GLFW also handles input and output from mouse and keyboard.
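As a minimal sketch of this idea (assuming GLFW3 and an OpenGL ES 2 capable driver are available; the window size, title, and loop body here are illustrative and not part of the EGE2D API), context creation and the per-frame loop look roughly like this:

```cpp
#include <GLFW/glfw3.h>
#include <cstdio>

int main() {
    if (!glfwInit()) {                       // initialize the GLFW library
        std::fprintf(stderr, "GLFW init failed\n");
        return 1;
    }
    // Ask GLFW for an OpenGL ES 2.0 context, the library's rendering basis.
    glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);

    GLFWwindow* window = glfwCreateWindow(800, 480, "EGE2D demo", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);          // the Graphic Context lives here
    glfwSwapInterval(1);                     // sync buffer swaps with the display refresh

    while (!glfwWindowShouldClose(window)) {
        // ... regenerate the picture for this frame here ...
        glfwSwapBuffers(window);             // present the finished picture
        glfwPollEvents();                    // mouse/keyboard callbacks fire here
    }
    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}
```

All the GUI's per-frame modifications to the picture happen between one glfwSwapBuffers() call and the next, which is exactly the "sequence of pictures" described above.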
Handle images in minutes with the texture handler class.
The picture as an image.
We just talked about the pictures being displayed, but what exactly is a picture?
In our GUI the picture is a JPEG image. The magic is to upload the JPEG to the graphics card through OpenGL whenever we want to update the content. The image represents exactly what the user sees on the screen at the present frame. In our GUI we create one image on the fly and populate each of its pixels with the content we want to present. The easiest content to display is a single picture, replayed identically every frame; obviously, in this case the visualization is static and no action happens on it.
Loading a picture with the Jpeg class by reading a file from disk (a window background is a good choice), we receive back a one-dimensional byte array populated with all the RGBA pixel colors. Textures are smart and useful: you can open, modify, and save them in minutes.
RGBA colors: In short, RGBA is a color format for a single pixel on the screen. In our GUI it is composed of four bytes, each of which represents a component of the pixel: one byte for red, one for green, one for blue, and one for alpha. Alpha is the transparency coefficient, and the other three together make up the visual aspect of the pixel. More information on RGBA can be found on Wikipedia and elsewhere on the web.
Access JPEG data with ease.
This section is not mandatory, but it can help with image handling in general, not just for the GUI. Think about how the Gimp software handles overlays: with Gimp you can have multiple layers of images and apply, level by level, color operators that add effects to the final visual result. Starting from this philosophy, we designed the texture class so that it can apply the same effects, and more, to textures, obtaining a flexible and modular way to add the components we like. This is done in a patterned way by adopting the strategy design pattern.
Never played with images? A user-friendly class to manage JPEGs: just half an hour to start working with images, their internal pixels, and a class structure that you will be able to extend by yourself.
Widgets as controls. An easy design to place controls on your pages.
What about widgets? Widgets are the basic elements of the GUI: they are placed on a page and occupy a sub-rectangle of the display. For example, a widget can be a pushbutton that acts as a switch for a device (such as an LED/lamp). In the EGE2D GUI, widgets are actually named Controls: each element that can be placed on a page is a control, and we provide several of them: imageControl, slideControl, pushbuttonControl, and others. Thinking back to the previous concept, that the GUI displays a sequence of images, a widget is simply a texture to be drawn over the main page in the sub-area where the control is located. Which image should be drawn? The one that reflects the current status of the widget itself: if the widget is a pushbutton and it is released, the image to draw is its released texture; if it is pressed, the relevant pressed one. If the widget is a slider control, the image depends on the position of the slider. Every control has its own display implementation as a polymorphic aspect. So what is a widget/control in the GUI? As we said before, it is a class object that inherits from two important parents: briefly, one parent (IControl) carries size and location on the screen, while the other is a multi-texture container that holds the pictures to be displayed (at least two, for the off and on status). The specific control reimplements the draw() method to react properly to display refreshes. For example, draw() for a pushbutton switches its behaviour between the two images stored in it, while a slider, in the same draw() method, redraws the same slider texture at a new location. This is an example of polymorphic behaviour.
Core design. The composite pattern to manage control redraws.
Almost always, some controls overlay others; think about the background image, which is necessarily covered by all the other controls on the page. So, how does the GUI handle this situation? EGE2D implements a hierarchical structure that acts as a frame for controls. The structure is a tree, built on the composite design pattern. In short, the composite pattern is a tree structure to which you can attach any type of object. To make this possible, all the controls inherit a node/leaf behaviour from the composite, allowing them to participate in the tree. At the end there is a root node to which, one by one, we attach all the controls of our application. When a redraw is requested, the root's draw() is executed before those of the other nodes and leaves; then all the other draw() calls are executed recursively. In this way we are sure that the elements deeper in the tree are redrawn after the ones above them, which results in a controllable behaviour: a z-order-like philosophy is achieved. The composite pattern naturally provides a method that traverses all the nodes; each time it finds a draw() method it calls it, and the relevant sub-rectangle image of the control is placed over the picture to display. Summing up the process: first we design a page with a root and drawable sub-nodes attached (also via an XML descriptor); then we regenerate the picture to display by drawing the root (which should be the background image); after that, the image is redrawn sub-region by sub-region with the content of each control's texture.