Coolcoder360's Devlog/updates thread
1 Year anniversary of Gruedorfing!
Huzzah, that’s right. I started this stuff on Feb 9th 2021, and now here we are at Feb 9th 2022. (Not at time of writing; this might be posted early. I’m writing this on Feb 3rd because I already have things to mention that seem to go well with the 1 year anniversary.)
Have I gotten a lot done? Well, not as much as I had planned/hoped when I started; I sort of hoped to have completed a few of my planned milestones last year. But I do have good news: I completed 0.0.1! Sort of.
0.0.1 milestone complete!
Again, sort of. I didn’t bother finishing testing the scene save/load; I just fixed the errors I was having with my test case for material save/load, which was to have a Lua script manually save/load materials and such. I didn’t let the scene save it all and load it all, so I likely still have to add material saving to my scene save/load.
But now I can go on to my next milestone of doing UI things! I should also probably try to better define what that milestone will look like. The first milestone, 0.0.1, took me so long to wrap up that I think I should definitely downscale my expectations for 0.0.2, because I’ll likely try to scope creep the next milestone too.
But before we go on to rant about 0.0.2 planning, let’s cover what we have implemented, and what is tested, so far:
- Lua script running, ticking
- Lua APIs to load/save meshes, materials, load textures, add/remove entities and components to entities, bind/map lua functions to input actions
- working input manager that is mostly tested (but with mouse input being broken due to using the wrong type of mouse input)
- PBR Deferred renderer with a forward pass that handles rendering text and will eventually handle rendering transparent/non-opaque objects (transparency/blending not yet implemented or tested)
- Only does direct lighting so far, no IBL, no GI
- 3d point lights do work, no other light types
- In engine versioning that should work with Jenkins
- (Tested, but I don’t trust Jenkins to not entirely break, need to also back up Jenkins configurations etc.)
- docker container which can build engine, working in Jenkins
- Scene loading/saving
- Generic Json serialization/deserialization implementation, can be drop in replaced with other types of serialization
- Material creation, save/load
- mesh save/load, creation from Lua
- Model loading (from files with Assimp) ported but not tested
- EnTT based Entity-Component system
- Hierarchy support added
- Multithreaded Render and Update loops
- Generic FSManager abstracting away all filesystem operations from the rest of the engine
- UTF-8 support for rendering text in Unicode for various languages
- and font loader for loading/rendering fonts
- Text input with my own font loading/input methods.
- used for a lua only console with commands
Wow, that is a lot, which definitely makes me feel better about how long it took. Now that most of the framework is put together (the rendering, EnTT, Lua scripting, etc.), I have hopes that future features will be at least a little bit faster to implement, or at least that I’ll spend less time debugging the basic framework and more on the actual feature implementations themselves.
What’s in store for 0.0.2
So initially I had planned for audio to be in 0.0.2, but audio is a lot to take on, since I also want UI, etc. Basically this is the classic case of the product manager wanting everything done yesterday for free, but I’m both the product manager and the dev in this case, so the only person who loses here is me.
So I think here’s my initial plan for 0.0.2:
- UI support with Nuklear
- I think this is important because it means I can make the editor more usable than remembering console commands. Combine that with having the scene save/load working at least a bit better, and it should mean I can make, save, and load back scenes in engine, which would be helpful for testing rendering in the future, both for efficiency/benchmarking and for other tests.
- Add Lua API to load models from files into meshes and materials.
- This way it’s useful for the editor to be able to import more complex meshes instead of either having to manually define the vertices, faces, texcoords, etc.
- Fix Mouselook issue
- add keyboard movement to move around scenes in editor
Basically 0.0.2 will be the “Editor milestone” to get editing things to be a bit more useful, and easy. Get the UI stuff in, make editor UI, make editing more usable than having to accurately type a bunch of commands.
What else is next for the Engine milestones?
Well, based on how long 0.0.1 took and how much stuff was planned for it in my quire project for it, I think it makes sense to try to keep milestones to be smaller, with maybe 2-3 big tasks, and then 1-2 bug fixes/small tasks. This way milestones continue to be a small bite I can feasibly chew in a few months instead of a whole year.
With this logic in mind, I’m trying to basically have a theme for each of the next several milestones, since there are a lot of major chunks I still want to put in before I drop the 0.1.0 release, which I would call the “Barely works but has all the pieces together to make a game in” release.
So the next few releases/milestones are likely to be themed like this:
- 0.0.2 The Editor Milestone
- add everything necessary to make editing easier
- 0.0.3 The Physics Milestone
- add support for creating rigidbodies, kinematic bodies, and physics constraints between the bodies
- 0.0.4 The Audio Milestone
- add in the miniaudio stuff. I need to figure out if I want a low level API or a high level engine API; I might just do low level so I can control the audio data/assets my own way instead of using whatever their way is, but that means I would need to implement some things myself that they likely pre-can.
- 0.0.5 The Asset Packaging/Exporting Milestone
- This is basically the zipping all the scripts, configs, assets needed for a game into a .pak/archive file to be loaded later.
Each milestone may have some extra nice to have features or bug fixes sprinkled in, but this means there are 1-2 big tasks for each milestone, and then some (hopefully) smaller tasks sprinkled in.
Those milestones will also get me most of the way to 0.1.0 with the basics of what is needed in a game.
So making a game by that point would basically be: go into the editor and add entities for things/worlds, write and hook up Lua scripts for crap, and then likely write an init/setup Lua script to hook it all together.
Only thing missing there is animation, but in all honesty, if I want to make my racing game idea, I don’t think I really need any animation to have physics cars flying around a track. You just need physics to knock shit around, and maybe you can cheat animations by having Lua scripts tick and update stuff each frame. That would definitely get laggy af most likely, but we’ll see. If I can churn out a piece of sh-, I mean, A golden masterpiece game after my 0.0.5 milestone, I think that’ll help me verify that my engine works. Skeletal animation and basic animation can all come later/after the fact. Heck, controller support can be after the fact as far as I’m concerned; I’m not expecting this thing to be a masterpiece.
So that’s probably about as granular as I want to get with planning for now. There are some other things, like translation support, that I may want to add/move around, but this is the basics of what I would need to make a basic game. As long as I don’t need anything animated, I could make a shooter, a marble roller/monkeyball game, a racing game, or any number of other things. Maybe even a golf game; not that a golf game is the kind of game I necessarily think would be the most exciting to make, but it could have some interesting value added to it, like loops and other things. And Golf It is a somewhat relaxing game, so it may make sense to try making a golf game, and it would likely be easier to make into a multiplayer game, because people hardly care about rubber banding in a golf game compared to a high stakes racing or FPS game.
Okay, now that I’ve laid out what the plan is, time to get into the specifics of the UI. The plan is basically to use Nuklear, but Nuklear works sort of like imgui: you basically have to call Nuklear functions every single frame for every single UI piece you want to render. That’s great and all, but doesn’t really go well with Lua scripting, because I don’t want to have to call into a Lua script just to get the UI going. It seems like it would make more sense to have a data structure or something that is assembled in Lua, and then Lua only gets called when some input happens.
This is great, but I realized that it’s a bit more complicated than I thought, for a few reasons.
- There’s more to a UI than just “here’s an ordered list of all the widgets I want,” because there’s layout involved. Each row in Nuklear can have a different height, a different number of columns, etc., and they all go into a window. In the window you can have panels, and then you can have trees where there are child UI elements that can be hidden/expanded, though apparently the same trees can also be used to display tree views? I think?
- It’s not easy to convey this all in something simple, I was hoping for just a window class to represent the UI window and then like, json or something for the rest, but it seems I’ll need some more classes to figure it all out since there are rows, with row height and number of columns, and crap like that.
- You have to call the Nuklear commands for each widget to fill the context with what is there, and then you have to parse out some other Nuklear commands for what to draw (or you can somehow get what you need to draw in a different format?). I’m still trying to figure out how I can render this out without tying it to being OpenGL only, since I know my renderer is way too involved/connected to everything else right now, and I don’t want to make that problem worse for when I want to add GLES 3 or Vulkan. (Yes, I want to add at least one of those if not both. GLES 3 would get me mobile and web, I believe; Vulkan would also get me mobile support, and supposedly if you’re good it can be more efficient, but I believe that relies on the skill of your implementation and not just on throwing stuff from GL to Vulkan, so I doubt I’d actually have Vulkan be faster.)
So I’m not entirely sure yet how I want to do it, I likely will want to get the rendering working with just the windows, and then later I’ll go add in support for the widgets and such.
Likely I’ll end up having some sort of row/bucket type class/struct, and then have that contain various widgets that would be in that row, along with data for the row height, columns, etc.
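To make that row/bucket idea concrete, here’s a rough sketch of what the retained structure might look like. All class names (`Window`, `Row`, `Label`) are hypothetical, and instead of invoking the real Nuklear API this sketch just records the calls it would make as strings; a real implementation would issue `nk_begin`, `nk_layout_row_dynamic`, `nk_label`, `nk_end`, etc. each frame.

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical retained-mode description of a Nuklear UI.
// Real code would call nk_begin / nk_layout_row_dynamic / nk_label each
// frame; here those calls are simulated by recording what would be issued.
struct Widget {
    virtual ~Widget() = default;
    virtual void draw(std::vector<std::string>& calls) const = 0;
};

struct Label : Widget {
    std::string text;
    explicit Label(std::string t) : text(std::move(t)) {}
    void draw(std::vector<std::string>& calls) const override {
        calls.push_back("nk_label(" + text + ")");  // stand-in for the real call
    }
};

struct Row {
    float height = 30.0f;  // per-row height, as Nuklear layouts allow
    int columns = 1;       // number of columns in this row
    std::vector<std::unique_ptr<Widget>> widgets;

    void draw(std::vector<std::string>& calls) const {
        calls.push_back("nk_layout_row_dynamic(h=" + std::to_string((int)height) +
                        ", cols=" + std::to_string(columns) + ")");
        for (const auto& w : widgets) w->draw(calls);
    }
};

struct Window {
    std::string title;
    std::vector<Row> rows;

    void draw(std::vector<std::string>& calls) const {
        calls.push_back("nk_begin(" + title + ")");
        for (const auto& r : rows) r.draw(calls);
        calls.push_back("nk_end()");
    }
};
```

The idea would be that Lua assembles these objects once, and the UI manager walks the tree every frame to replay the immediate-mode calls.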
I’ll also need to figure out how to configure the style/skinning as well, but I think that’s a later feature. I think once I get started I’ll be able to break it into smaller chunks, create more stories in quire for pieces, and maybe rearrange some effort to later milestones, etc.
Had some troubles porting the Nuklear examples. I haven’t gotten Lua bindings or anything together for this just yet; I basically just ripped the Nuklear OpenGL rendering code out of one of the examples and shoved it into my own little wrapper class, and now I have a window like this:
I consider this some pretty good progress!
Okay the title bar is missing from that one, try this one:
This is great! Basically proves that my UI rendering works, along with the rest of the nuklear pipeline (it’s kind of a lot of setup stuff, very confusing, especially if you want to tweak the UI colors, use images, skin it, etc). I haven’t tested any inputs yet or that my code to shove input through to Nuklear works, but this is a great start.
Next I need to figure out how to store rows/UI widgets in my windows, and then also come up with some sort of Lua API to specify what the inputs should be, as well as get the values/inputs out. that’s likely going to be a bit rough I think, but I’ll see what I can do. After I have basic widgets working and lua bindings together I can then work on fleshing out more of the widgets and then also putting together a better editor UI, then I can really get going on stuff.
some art stuff/side project
I decided I wanted to test out Godot 4.0 Alpha 2 for a little bit to see what was new. I also spent some time in Blender making a few things.
I will say that I don’t think I’m ready to work in Godot 4.0 yet. There were some issues importing models where I’d drag in/add the model to a scene, but the mesh would not show up all of the time. That unreliability made it difficult to continue; I think I found a workaround, but it was too much work to be worth it.
I did however do some stuff in blender and made this candle model with fully procedural textures (that I plan to bake out and export for my engine, my engine can’t do procedural textures and I think that’s not in scope. for now.)
Not much progress on the engine from last week sadly, partially from being distracted with the Godot 4.0 Alpha 2 build testing I was doing. Some small amount of progress on the lua bindings for the UI stuff though.
Lots of other things going on, trying to buy a house, and it has a bundle of things wrong with it that we need or want to fix/repair before or shortly after moving in.
The water heater isn’t to code and can’t be replaced without cutting a hole in the drywall, there’s a failed radon test, the dishwasher doesn’t work, the garbage disposal doesn’t work, the whole house fan vents don’t open properly, a tree needs branches removed, yadda yadda yadda.
But hey! it’s a house and it has lots of storage. like fit our 3 crock pots and the instapot under the counter and have room for like 6+ months worth of pantry items for us kinds of storage space.
I’ve also realized just how expensive furniture can be… we have a big enough space, and I’ve always wanted to have a bar in the house we would get. Issue is, bar counters and bar cabinets all run well into the $3k range each for the nice ones of reasonable size, and we’re going to want one that locks because I have a 13 (almost 14) year old brother-in-law who we like to have visit, and we don’t need him experimenting with that stuff without our supervision. I told my wife we’d have to get just the best, farthest bottom shelf, cheapest tequila for him to try if he ever asks about trying alcohol; can’t let him come across the good stuff while we’re not there, otherwise he might end up liking it.
So because of all that expense, cost, and that we want something that locks, I’m considering making a bar cabinet (and probably bar too) myself. I did a little bit of wood working growing up, mostly with my dad since he was the one super into it. I’m not sure I’d be into it enough to get like a tablesaw or anything fancy, but if I could do it with a few hand tools maybe, then I could probably slowly come up with a bar cabinet.
I’ll probably start with looking around for plans or coming up with plans for the bar cabinet in freecad. Maybe I can 3d print some of the hardware myself and then assemble the rest of it out of plywood or MDF. Be on the lookout for some CAD stuff in the future I guess instead of just game dev.
Not much to speak of this time, still slogging through the Lua bindings for the UI stuff. I now have it so I can do all the same things you saw before for showing a window, but in Lua scripts now.
Still no widgets or anything like that, but at least now I have rough parity between the C++ and the Lua bindings.
Next I think is implementing lua bindings to do the Rows, which are basically to hold each row of ui elements in the window, and then I’ll need to add the C++ and lua to handle any actual input widgets or labels or progress bars and other such things.
Don’t remember if I’ve explained it before, so I’ll explain it again. Basically the way Nuklear seems to work, based on my possibly limited understanding, is you essentially have a “window,” which is a bucket that is a separate window in the UI; you can configure it to be closable, movable, minimizable, resizable, etc.
Then in that window you have the widgets, but they’re laid out in rows. Exactly how they are laid out in rows can be changed based on how you want to lay out the GUI, but basically what I’m planning is to have a Window object that just holds Rows, and have the Rows hold the UI components themselves. That way I can programmatically generate the full window and specify layout in Lua, and all that happens is: my UI Manager calls draw on the window; the window calls the Nuklear methods to set up the window, then calls draw on its rows; each row does the call to set up the layout, then calls draw on its widgets, which just do the Nuklear calls to set up the actual widgets (and will then get the input results from Nuklear and ping back into the configured Lua script/method with the input from the widget).
I was reading a factorio blog post recently and they mentioned doing automated end to end testing, so I thought it would be good to think about how that could be done, perhaps there are some other frameworks or something that could be used.
I do think that having a way to provide fake input to the engine would be easy enough, just by adding some stuff to the input manager to allow triggering input actions without needing GLFW to actually register the events. I haven’t fully fleshed out what kind of scripts the tests would be written as, but I think it would be helpful to start figuring out what engine modifications might help with automated tests.
Things like being able to run in a headless mode, that way you can avoid needing to use GPU and still be able to run multiple tests in parallel on the machine.
Maybe having some sort of “on demand” rendering to allow taking screenshots when needed, but otherwise not rendering anything? I’m trying to figure out how useful automated tests would be for testing actual gameplay vs. general engine features (to ensure there are no regressions), and I suspect taking screenshots may not be the most useful, because I could imagine the graphic quality varying a lot while a game is being worked on. But maybe there could be a consistent test case for the engine to verify that rendering matches at least somewhat closely between different renderers?
Additionally, it might be worth being able to capture/save the entity/component state to compare; then comparisons of state could be done between the specific entities whose functionality is being tested, and it would work regardless of the graphics changing.
So those are some thoughts I had about potential features that might need to be added to facilitate writing automated tests. Hopefully then, during development of a game, I could set up some automated tests for that game to test some functionality, maybe verifying that the main menu, pause menu, and options menu all work and don’t crash, and then maybe add them all to run in Jenkins, automated once a week or month.
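The fake-input idea above could be sketched like this. The names (`InputManager`, `bindAction`, `triggerAction`) are made up for illustration, not the real engine API; the point is just that once actions are decoupled from GLFW events, a test can fire them directly with no window or devices.

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// Sketch of an input manager that lets tests inject "fake" input.
// Actions are named; callbacks are bound to names; nothing here
// knows about GLFW, so a headless test can trigger actions directly.
class InputManager {
public:
    using Callback = std::function<void()>;

    void bindAction(const std::string& action, Callback cb) {
        bindings_[action].push_back(std::move(cb));
    }

    // Called from the GLFW key/mouse callbacks in normal runs,
    // and directly from test scripts in headless runs.
    void triggerAction(const std::string& action) {
        auto it = bindings_.find(action);
        if (it == bindings_.end()) return;  // unbound actions are a no-op
        for (auto& cb : it->second) cb();
    }

private:
    std::unordered_map<std::string, std::vector<Callback>> bindings_;
};
```

A test script would then just call `triggerAction("jump")` and assert on the resulting entity/component state, which lines up with the state-comparison idea above.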
Moving next week to a new place. lots of stuff going on for that, and not so much game engine progress expected. But once we get the place set up I’ll probably be right back at it trying to get my UI stuff figured out so I can get the editor on track. Once the editor is on track I’m hopeful I could start making some real games perhaps, even without fully workable physics or audio.
Took a bit longer to complete the move and get my desktop set up than expected, but had an unexpected funeral and road closure keeping me in a different state, so the one week I expected to not have any progress turned into 3 weeks.
Still trying to set up my office, need to set up the 3d printer and better route the ethernet cable from the kitchen on the first floor to my office in the basement. I got a 500ft spool of bulk ethernet cable to be able to route that around everything along the walls, and was going to 3d print brackets to hold it in place.
In order to better route the ethernet cable I’m designing some brackets to use to mount it to the wall so I can 3d print them off, figure since I’ve got a house now I may as well make use of the 3d printer to make things.
The bracket is basically just a half circle with a little hole for a nail to go through:
but I made it parametric in FreeCAD, so I can change the size of the hole to fit different nails or screws, the size of the arc to fit different size cables, or the thickness/overlap of the parts to make it sturdier if needed. Learning FreeCAD has been good, and I think it might help if I wanted to make furniture or other things that may not be 3d printed but require good design/planning ahead to make sure everything fits together. Blender is nice for 3d, but I’m not sure it’s parametric in the ways I need to make physical objects.
The status on the game engine is pretty same-y same-y, still doing UI Lua bindings and stuff to try to get it to where I can at least add a single widget to a row and add that to a window.
Once I’m that far, implementing other widgets shouldn’t be too bad, since Nuklear pretty much pre-cans all the hard parts, so I’m hoping it will be smoother at that point. Once I’ve got a few widgets in, I should be able to get some sort of editor UI going, and then hopefully from there it’s simply verifying the model loading stuff was ported properly, maybe dressing it up a bit to fit the current framework of how meshes work, and then I should be set for the basics of things.
Then I’ll tackle animation, physics, audio, etc in some future milestone, but I could feasibly start making levels/logic/something once the UI/model loading stuff is in. UI styling might need to be done at some point to make it not look awful, but that’s a longer distance worry for now I think.
Been in a little bit of a rut with the engine where I just have a slog of things left to write, make bindings for, etc for the UI stuff.
So between that and all the other crap going on I haven’t been super excited to work on the engine, so I figured it may make sense to start a side project: something to keep me coding but let me come back to the engine with a fresh mind, like when I worked on that dungeon crawler ages ago for a few months.
Effectively nothing has been finished since last time other than making a Label class and lua bindings for it, and filling out the implementation using nuklear, but not testing it. Now I’m trying to do a button but not really excited about it.
New Side Project
So the new side project is one I decided to do literally 1 minute prior to writing this. Basically I was just on my computer clicking around the browser windows, as one does when supposed to be doing real work instead, and saw a GOG window that I had left open on the main store page, with a sliver of a screenshot of a game called Hero’s Hour on it.
Opened it up and I saw the graphics and I thought to myself “making a little top down world map view RPG might be neat, maybe jrpg, maybe not, not sure. let’s open Godot” and so a game idea was born. that is currently only maybe 5 minutes old and will probably die about 5 hours from now.
Anyway, figured I’d stop there for that update; will try to write again next week on how long that new game idea survived.
Side project progress
Going to jump right to the side project, nothing to report for the engine
I’ve got a tileset I’ve been working on, forgot how fun it is to make a pixel art tileset:
The tileset has grass, dirt, sand, dirt/sand path, and paved path, trees, various buildings, and some mountains. It’s meant to have some tiles be on a layer over the top of the base ground layer, which may make the mapping a little bit complicated, but shouldn’t be too horrid.
I think I’m going for more of a jrpg type thing, or maybe sort of like a roguelike, I’m definitely leaning towards using some proc gen (discussed below) for creating maps/overworld, etc, and I’m thinking turn based will be best.
I’m also not aiming to have different heights/levels in the map, that’s just too complicated art wise and mapping wise, but I’m sure it can be faked with setting up tiles in a certain way.
So I’m planning on using proc gen to generate the map, instead of manually placing tiles, that’s nice because then I can use it to create:
- Replayability, you can play the same game multiple times with different layout, different world, and different experiences
- Larger worlds than I care to make manually, which is great because making large world maps, or lots of small interiors, sounds like it could get very tedious. This way I can make a larger world with more crap to do without spending too much time on all the individual crap; I just make a tool that makes each type of crap and then another tool to tell the crap where to be crapped out at.
I plan to do at least some of this proc gen, for the world map at least, using Wave Function Collapse. You can see one sample implementation and what it does here but I’m planning to make my own implementation in GDScript (for now, maybe I’ll figure out GDNative later if that’s too slow) so that I can understand it, and potentially tweak it if needed.
The gist of wave function collapse (WFC), as described in one of the several videos I watched on it, is that it’s like solving a Sudoku puzzle. You start with a few pre-chosen tiles, either randomly chosen or given to you, and then you go through all the other slots and narrow down the remaining options of what value/tile can go there. Once a slot gets to only one possible tile, or as few as possible, you randomly choose one from the options that can be there, and then you update the rest of the slots with the options remaining.
The way you know which tiles can go next to each other is by defining the possible adjacent tiles for each tile. If you look at the above GitHub repo, this can be done by providing a sample or “training” tilemap/texture, which lets the algorithm know how/where to put different tiles in relation to each other.
One thing to note about WFC is that it is all based on local connectivity: how each individual tile connects to its neighbors. There is no stored data about bigger picture things, which means you can very easily end up trying to make a generator that gives you square rooms but getting non-square rooms, or other similar issues, simply because the WFC algorithm doesn’t have any data on the inside vs. outside of a room. This kind of difficulty can be worked around, though, by including a sample tilemap that uses a different tile for the interiors of each room than for the exteriors; that will keep the room interiors internal to the room and prevent the room from sprawling on.
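To make the Sudoku analogy concrete, here’s a minimal sketch of the whole loop. The side project itself is in GDScript; this is C++ purely for illustration, and it simplifies two things: the "propagation" only prunes the four direct neighbors of each collapsed cell (not full constraint propagation), and it picks the first remaining option instead of a random one so the sketch is deterministic. Adjacency is learned from a tiny sample map, as the training-tilemap approach does.

```cpp
#include <cstddef>
#include <set>
#include <utility>
#include <vector>

using Grid = std::vector<std::vector<int>>;

// 'allowed' contains {a, b} when tile b may sit directly next to tile a,
// learned from horizontal and vertical neighbors in the sample (both directions).
std::set<std::pair<int, int>> learnAdjacency(const Grid& sample) {
    std::set<std::pair<int, int>> allowed;
    int h = (int)sample.size(), w = (int)sample[0].size();
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (x + 1 < w) {
                allowed.insert({sample[y][x], sample[y][x + 1]});
                allowed.insert({sample[y][x + 1], sample[y][x]});
            }
            if (y + 1 < h) {
                allowed.insert({sample[y][x], sample[y + 1][x]});
                allowed.insert({sample[y + 1][x], sample[y][x]});
            }
        }
    return allowed;
}

Grid collapse(int w, int h, const std::set<int>& tiles,
              const std::set<std::pair<int, int>>& allowed) {
    // Every cell starts as a "superposition" of all tiles (Sudoku pencil marks).
    std::vector<std::vector<std::set<int>>> options(
        h, std::vector<std::set<int>>(w, tiles));
    Grid out(h, std::vector<int>(w, -1));
    for (int done = 0; done < w * h; ++done) {
        // Find the uncollapsed cell with the fewest remaining options.
        int bx = -1, by = -1;
        std::size_t best = tiles.size() + 1;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                if (out[y][x] == -1 && options[y][x].size() < best) {
                    best = options[y][x].size();
                    bx = x; by = y;
                }
        if (options[by][bx].empty()) break;  // contradiction: this sketch just gives up
        // Collapse it; a real generator would choose randomly among the options.
        int chosen = *options[by][bx].begin();
        out[by][bx] = chosen;
        options[by][bx] = {chosen};
        // Propagate one step: neighbors may only keep tiles allowed next to 'chosen'.
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        for (int d = 0; d < 4; ++d) {
            int nx = bx + dx[d], ny = by + dy[d];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h || out[ny][nx] != -1) continue;
            std::set<int> kept;
            for (int t : options[ny][nx])
                if (allowed.count({chosen, t})) kept.insert(t);
            options[ny][nx] = kept;
        }
    }
    return out;
}
```

The "give up on contradiction" branch is exactly the running-into-a-corner problem discussed below; real implementations restart or backtrack there.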
How is WFC different from Wang Tiles?
Wang tiles are something I’ve done in the past, and I think I mentioned them somewhere upwards in this thread, so I wanted to mention how they seem similar, at least in theory, to WFC. With WFC you define the possible adjacent tiles and try to pick a tile that can go in a spot adjacent to the others, based on edge connectivity.
In theory, you could probably combine WFC with Wang tiles and be totally fine, so what’s the difference between what I’m trying to do now and what I did before?
Previously, with Wang tiles, the ones I used were basically large blocks made up of many smaller tiles/pieces. This time I’m trying to do WFC on those smaller pieces instead: rather than having to make the big tiles and then manually create rooms/chunks with them, I can just focus on making the little tiles, piece them together in an example map, and let WFC do its thing from there.
Another difference between what I did earlier and what I’m doing now is that previously I did not propagate possible options through the map when generating. Instead I defined the edges of my confined/finite map as closed off, then randomly chose tiles to fit with that, making sure the connectivity of any already placed tiles fit the connectivity of the tile chosen. Is there a reason you need to track and propagate the possible tiles instead of just picking from pre-computed connectivity lists, where you group all tiles with specific connectivities together? Not as far as I can tell; in fact, I think having to propagate your options through would be slower computation wise. However, propagation lets you find the cell with the lowest number of possible choices, fill that in, and propagate from there, so hopefully (and this is hopefully; I suspect it’s still possible) there will be fewer instances of the algorithm running itself into a corner where no tile exists that can fill a spot. That happened multiple times when doing Wang tiles, and it meant that to get more generation to work, I had to spend a ton of time manually creating new Wang tiles to fill the connectivity combinations I hadn’t created yet, or deal with having a 4 way intersection wedged in there to not block off any paths.
I will say that with the Wang tiles I did before, I could do a random walk to make sure you could get from point A to point B, by pre-defining the necessary connectivity without defining tiles for those slots yet, which let me create a traversable level from start to finish. Here, for an overworld, I think it should be possible to not run into too many untraversable cases, or if anything I can pre-define some parts of the overworld to make sure things are traversable, and then just proc gen the rest.
Also, in theory, with WFC I could make an infinitely generated overworld and generate each individual place on the fly. That would be great and all, but if I want a classic story and maybe some sort of sane leveling system, it may make sense to limit things a little bit to prevent them from getting too insane. I’m not going to rule out quest and story proc gen just yet, but I think for a better jrpg type experience, some intentional design is important instead of letting proc gen run completely rampant.
Side quests though, that’s probably where I’ll let proc gen go because if the system generates towns/buildings/whatever, then the system can fill those with stuff, I’m not going to hand make everything that is like that. Proc gen for weapons, tools, items etc will probably also be useful.
Next steps on this project: getting the WFC actually working. I’ve rambled on about this, but so far I have a file in Godot that doesn’t do much; it stores the adjacent tiles from a pre-canned 2d array so it knows how to connect things, but it doesn’t use that to create anything just yet, and the propagation/narrowing down of options is going to be a little bit intense.
So far I’ve been enjoying making the tileset so that’s good, may make some sort of at least placeholder character for now, or maybe go through and make some items or ui features for later.
After the WFC generates the array of possibilities I think it’ll be a matter of getting it to create the TileMap and then putting that for someone to see, then making it so you can walk around in there, have collision, etc.
Side Project progress
I decided to spend more time making art instead of getting the proc gen all figured out, so I also made some item art:
Basically I went a little bit overboard making random items, trying to have a variety of variants for each item. I do get that I could just do palette swapping or other shader tactics to get different colors, but I decided not to for now; this way all color variants strictly use the Dawnbringer 32 palette.
I also did some additional outdoor tileset art, but not much new since last post.
Not much progress to report on the proc gen front this time; I'm still trying to wrap my head around how to actually do the logic. It seems like it will be much slower to generate than the wang tiles, just because there are a lot of arbitrary steps like "find the cell in the tilemap with the fewest possible tiles remaining," which look like they'll be a mess of looping through things, making the generation time look insane.
I may mitigate some of that by storing a list of all the cells that haven't had a tile placed yet, at least to avoid looping through all XxY cells in the tilemap on every iteration, and instead just pick from the ones remaining. I'm trying not to think about the big O of whatever I end up with.
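That bookkeeping could be sketched like this (Python again for illustration; class and method names are made up, and a real version might also bucket cells by option count to speed up the minimum search):

```python
# Sketch: track undecided cells in a set so each "pick the lowest-entropy cell"
# step only scans the remaining frontier, not every X*Y cell in the tilemap.

class OpenCells:
    def __init__(self, width, height):
        # Start with every cell undecided.
        self.open = {(x, y) for x in range(width) for y in range(height)}

    def pick_lowest_entropy(self, options_at):
        """options_at(x, y) -> number of tiles still possible at that cell."""
        return min(self.open, key=lambda cell: options_at(*cell))

    def mark_collapsed(self, cell):
        # Once a tile is placed, this cell drops out of future scans.
        self.open.discard(cell)
```

This doesn't change the worst-case complexity, but it shrinks the scan as the map fills in instead of re-walking the whole grid every iteration.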
I've also tossed around the idea of learning Rust in some way, but I don't really want to add yet another side project, so I've been putting that off for now. I may wind up pausing this side project for that one, or similar. I'm not really trying to write two engines, one in C++ and one in Rust, but maybe if I used the Rust engine to target DOS or something else like that? Who knows.
That’s all for now.
And back to the Engine!
Got back to doing UI work on my engine: labels and buttons can now be placed into windows. They do overlap in the screenshot here, but I'm not sure I care for now; it's working!
And the button even works (just logging stuff for now when pressed, but still, that’s progress!)
However, there are now cases of odd/unreliable segfaulting, some involving EnTT type assertions and such, so I'm a little bit concerned about the reliability/stability. Again. But progress has been made, and good progress at that; I mean, what more do you need for an editor than buttons and labels? Okay, text input is a good next thing, and then checkboxes and radio buttons are probably also going to be on the list, maybe sliders too. But baby steps for now.
Other side projects
In other news I’ve started a little Rust project with SDL2 just for like, learning Rust and such.
I’ve also started making a Pixel Font.
And I’m still doing pixel art for my Godot jrpg procedural map generation side project.
I think next steps are going to be
- Add text input boxes
- Add checkboxes
- Add radio buttons
- Try to fix up the stability issues to make things more reliable
That’s all for now!
So I had my brother-in-law visiting for the past couple weeks, and he's young and excited about game making, so I did what everyone does: started a new project with him to work on.
Just a Godot project; there isn't really much in it to be honest, but maybe it'll get there, or it'll just sit as another unfinished project.
Him being over distracted me from most of my other projects.
Engine progress has been pretty quiet.
I do however have quite the development at work, so I'm going to need to step up my note-taking and organizing game. I prefer to handwrite my notes on paper, since that helps me remember what I wrote and lets me take notes off to the side while I'm presenting, sharing, or typing something else on my computer.
This does mean a couple things though:
- My notes aren’t easy to search or sort
- my handwriting is horrendous, so being able to read my own writing is sometimes difficult, or nigh impossible
- if I want to convert them to text, I wouldn't expect most standard out-of-the-box OCR tools to work very well.
I did look into something I'd heard of a bit called Rocketbook, as it looks to be the least expensive type of "smart notebook," starting at only about $34; you get a notebook with sheets that can be erased with water as long as you use the special pens.
The issue is that their app's OCR involves sending the scans to their servers, which in my mind is kind of a no-go. I want something that I know doesn't create any kind of security/privacy issue with what I'm working on, and will let me potentially use the solution for both work and non-work things.
So basically I started a new project to train some ML models to do OCR on my handwriting. I took an intro to machine learning/AI class a long, long time ago, and I've dabbled with tensorflow in the past, so it's not all super new to me, but OCR is a somewhat complex topic with multiple moving parts, so I'm following along with a couple of tutorials.
The gist of it is that you have to do two things (typically with two different models):
- Detect where the text is
- this usually spits out a bounding rectangle of each chunk of text in the image so that it can send a cropped image of just the text to the second model
- recognize what each chunk of text actually is
- this is what does the actual handwriting recognition
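The two-stage flow above can be sketched as a tiny pipeline. The `detect` and `recognize` callables here are stand-ins for the real trained models, and the list-of-lists "image" is a toy stand-in for actual pixel data:

```python
# Sketch of the two-stage OCR flow: detection finds text boxes, recognition
# reads each cropped box. detect/recognize are placeholders for real models.

def ocr_page(image, detect, recognize):
    """Run detection, crop each bounding box, recognize each crop."""
    results = []
    for (x, y, w, h) in detect(image):           # stage 1: where is the text?
        crop = [row[x:x + w] for row in image[y:y + h]]
        results.append(recognize(crop))           # stage 2: what does it say?
    return results
```

The point of the split is that each model's training data stays simple: the detector only needs boxes, and the recognizer only ever sees small crops containing one chunk of text.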
There are actually pre-trained models you can find for both steps; in fact there's already a project using said models on Android to do the recognition completely on a phone. However, those models are trained on computer printed/generated text, not handwriting, so I doubt they will be very accurate.
So my plan is basically to find models that do those things and then create some sample writing data myself, so I can train the models on my own writing instead of someone else's. This is good because sometimes my words turn into squiggles when I'm going quickly, so if I write crap out, then type it up so I know what it says, and feed both to the model, in theory I should be able to get some semblance of a working OCR model for my own handwriting.
I would likely need to find and train a detection model and a recognition model. I plan to do that on desktop using python + jupyter lab (an in-browser python editor/tool which is fairly nifty for data processing and can display matplotlib graphs and charts inline with the code blocks).
After that I need to convert each model to a tensorflow lite model that can be loaded by an Android app, so I could basically make an app that takes a picture of a piece of paper or notebook page and turns it all into text.
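The conversion step itself is pretty short with the standard TensorFlow Lite converter API. The tiny Dense model below is a throwaway stand-in for the real trained recognizer, just to make the snippet self-contained:

```python
import tensorflow as tf

# Stand-in for the trained recognition model; the real one would be loaded
# from a checkpoint or SavedModel after training in jupyter.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(10),
])

# Convert the Keras model to a TFLite flatbuffer an Android app can load.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

with open("recognizer.tflite", "wb") as f:
    f.write(tflite_bytes)
```

The resulting `.tflite` file is what gets bundled into (or downloaded by) the Android app and run with the TFLite interpreter on-device, which is what keeps the scans off anyone's servers.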
I also haven’t decided what should happen for the images, I do sometimes have diagrams, and formatting is also a fun challenge, so I may have the app take the images and save them separate from the text, and/or I might have the images get put into a PDF along with the OCRed text data.
For now I'm following along with this tutorial. It seems fairly in-depth, explains the code as you go, and is in an easy format to just copy/paste into a jupyter notebook. It only covers the handwriting recognition portion, not detection, so I will likely need to figure out how to locate the text in the picture/page with a different model.
I've been eyeing a model called EAST for that. It seems more aimed at detecting text out in the world rather than on a page, but really I think that will just improve the robustness of the model for what I want.
So I guess it's off to collecting a few thousand word/writing samples from myself to train this puppy on. Tune in next time to see how long this project lasted!