Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Spectere

Pages: [1] 2 3 ... 14
1
Random Chat / The Lamest Topic Ever Conceived
« on: February 20, 2021, 05:48:42 PM »
I forgot to make a replacement for the "you couldn't ask for a lamer topic" thread, so here y'all go!

I really wish there was a 1-to-1 replacement for LINQPad on macOS/Linux. I've messed around with RoslynPad, but I always end up running into something that pushes me to fire up a Windows VM and use LINQPad instead. That's going to be a bit of an annoyance if I stick with MacBooks moving forward given the switch to ARM, though a lot can happen in a few years.

I might also see about contributing to RoslynPad after I push through a few of my personal projects.

2
Computing / Spectere's Random Programming Bullshit
« on: December 03, 2020, 06:16:59 PM »
Sometimes I get in these weird moods where I just want to try implementing something different. It's kind of a nice way to learn new things and to keep myself sharp. Usually these are little one-offs, like when I decided to implement a Wolfenstein 3D-style raycaster. This thread is dedicated to those one-offs.

One of my more recent one-offs has been a little expression parser that I've been writing in C#, currently targeting .NET 5.0 (originally .NET Core 3.1). I'm still derping around with it, but it combines a number of concepts and is shaping up to be fairly nifty.

I did have a few objectives in mind when I approached this. In addition to implementing the parser, I wanted it to be able to access variables and API calls. Essentially, the variables would be key-value pairs that the expression engine could read from and write values to, and the API would be implemented by passing it a class with methods, properties, and fields. Properties and fields in the API class can be modified using an expression, and methods can be called. Since I don't want types to be a concern, some sort of dynamic typing support will be required.

Since typing is one of the core factors, I went ahead and resurrected a class I wrote a while ago to handle dynamic types, then modernized it to use the most current language features (pattern matching and switch expressions helped to declutter the code quite a bit). This class contains an internal object and dynamically converts it between various types depending on what it contains. When a value is created, it will attempt to coerce the type to one of the following, in order: long, decimal, double, string. The reason for this is to ensure that the number is stored with full precision (which long and decimal are far more adept at doing) before using an approximate type.

This class also has overloads for all of the arithmetic and comparison operators, which all handle converting the two sides to like types, performing the operation, and returning a new Value with the result. This also includes provisions for string comparisons and concatenations, and it will throw an exception if two incompatible types are used (such as subtracting a string from a long). When all is said and done, stuff like this becomes possible:

Code: [Select]
var a = new Value("1234");
var b = new Value(0.567m);
var c = a + b;  // c = 1234.567

Note that a was initialized using a string, while b was initialized using a decimal, yet the addition was still successful and returned the expected result (as opposed to "12340.567", which would have been the result if string "addition"—that is, concatenation—were used). We also put in a bunch of different implicit conversion overloads, allowing it to return the desired type and converting it if necessary/possible.

The next step is implementing the parser itself. This was done by tokenizing the string (that is, reading each symbol and assigning a type to it). For example, the tokenizer would translate this:

Code: [Select]
_a := (4 + -5) * 2
…into this:

Code: [Select]
Identifier (_a)
Assignment
OpenParens
Value (4)
Addition
Subtraction
Value (5)
CloseParens
Multiply
Value (2)
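
A scanner that spits out a token stream like the one above only takes a handful of lines. This is a hypothetical C++ mock-up with invented names, not the actual C# tokenizer:

```cpp
#include <cassert>
#include <cctype>
#include <string>
#include <vector>

// Hypothetical mock-up of the tokenizing pass (the real tokenizer is C#).
struct Token {
    std::string type;
    std::string text;
};

std::vector<Token> Tokenize(const std::string& src) {
    std::vector<Token> out;
    size_t i = 0;
    while (i < src.size()) {
        unsigned char c = static_cast<unsigned char>(src[i]);
        if (std::isspace(c)) { ++i; continue; }
        if (std::isdigit(c)) {
            // Run of digits -> Value token.
            size_t j = i;
            while (j < src.size() && std::isdigit(static_cast<unsigned char>(src[j]))) ++j;
            out.push_back({"Value", src.substr(i, j - i)});
            i = j;
        } else if (c == '_' || std::isalpha(c)) {
            // Underscore or letter starts an identifier.
            size_t j = i;
            while (j < src.size() &&
                   (src[j] == '_' || std::isalnum(static_cast<unsigned char>(src[j])))) ++j;
            out.push_back({"Identifier", src.substr(i, j - i)});
            i = j;
        } else if (c == ':' && i + 1 < src.size() && src[i + 1] == '=') {
            out.push_back({"Assignment", ":="});
            i += 2;
        } else {
            // Single-character operators and parens.
            switch (c) {
                case '(': out.push_back({"OpenParens", "("}); break;
                case ')': out.push_back({"CloseParens", ")"}); break;
                case '+': out.push_back({"Addition", "+"}); break;
                case '-': out.push_back({"Subtraction", "-"}); break;
                case '*': out.push_back({"Multiply", "*"}); break;
            }
            ++i;
        }
    }
    return out;
}
```

Note that '-' is always tagged Subtraction at this stage; deciding whether it's actually a unary negation happens later, which is why the listing above shows an Addition/Subtraction pair for "+ -5".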

The expression engine then takes that tokenized collection and evaluates it. It does this by breaking the tokenized expression down into a tree, taking operator precedence into account, descending until it reaches leaf nodes (nodes with no children, such as values and identifiers). From there it works its way back up, evaluating the binary and unary nodes until it arrives at a final result. In the above example, the tree would look something like this:

Code: [Select]
root: NodeBinary(Assignment)
    left: NodeLeaf(Identifier: _a)
    right: NodeBinary(Multiplication)
        left: NodeParenthesis(4 + -5)
            value: NodeBinary(Addition)
                left: NodeLeaf(Value: 4)
                right: NodeUnary(Negation)
                    value: NodeLeaf(Value: 5)
        right: NodeLeaf(Value: 2)

5 has a negation unary operation applied, giving us -5. Then, 4 is added to the resulting -5, giving us -1. That value is multiplied by 2, giving us -2. That value is then assigned to the identifier _a.
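The bottom-up walk is easiest to see in code. Here's a toy C++ version of the leaf/unary/binary evaluation; the node layout is invented for this sketch, and the real C# nodes also handle assignment, identifiers, and so on:

```cpp
#include <cassert>
#include <memory>

// Minimal node: 'v' = value leaf, 'n' = unary negation, '+'/'*' = binary ops.
struct Node {
    char op;
    double value = 0;  // only meaningful when op == 'v'
    std::unique_ptr<Node> left, right;
};

std::unique_ptr<Node> Leaf(double v) {
    auto n = std::make_unique<Node>();
    n->op = 'v'; n->value = v;
    return n;
}

std::unique_ptr<Node> Unary(char op, std::unique_ptr<Node> child) {
    auto n = std::make_unique<Node>();
    n->op = op; n->left = std::move(child);
    return n;
}

std::unique_ptr<Node> Binary(char op, std::unique_ptr<Node> l, std::unique_ptr<Node> r) {
    auto n = std::make_unique<Node>();
    n->op = op; n->left = std::move(l); n->right = std::move(r);
    return n;
}

// Recurse down to the leaves, then fold results back upward.
double Eval(const Node& n) {
    switch (n.op) {
        case 'v': return n.value;
        case 'n': return -Eval(*n.left);
        case '+': return Eval(*n.left) + Eval(*n.right);
        case '*': return Eval(*n.left) * Eval(*n.right);
    }
    return 0;
}

// (4 + -5) * 2, built by hand instead of by the parser.
double EvalExample() {
    auto tree = Binary('*',
                       Binary('+', Leaf(4), Unary('n', Leaf(5))),
                       Leaf(2));
    return Eval(*tree);
}
```

Evaluating that hand-built tree gives -2, same as the walkthrough above.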

Since the value type allows us to handle all types more or less equally, it also lets you do some wonky stuff. The following concatenates -16 and 45, and 23 and 52, together as strings, then adds the results together:

Code: [Select]
(-16 & 45) + (23 & 52)
-1645 + 2352
707

Okay, great. So we have a working expression parser. The next step is to implement variables and API calls. Identifier nodes are going to handle both of them, with the convention that simple variables are prefixed with an underscore and API calls handle…well, basically everything else. We also want everything to be case-insensitive, so we need to make sure to take that into account.

Variables are simple, so let's start with those. We're going to treat them as key/value pairs, so we can just use a Dictionary<string, Value> and that'll do the trick. We'll handle all of that in the API context handler. If we see an underscore prefix, we treat it as a variable; otherwise we treat it like an API call. For both getting and setting variables it's a simple matter of transforming the key name to lowercase for case-insensitivity. For gets, we check to see if the key exists, returning null if it doesn't or the value if it does. For sets, we add a new entry to the Dictionary if the key doesn't exist and update the existing entry if it does.
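A minimal version of that variable store could look like the following. Again, a hypothetical C++ sketch (the real thing is a C# Dictionary<string, Value>), with a plain double standing in for Value:

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <optional>
#include <string>
#include <unordered_map>

// Case-insensitive key/value variable store, as described above.
class VariableStore {
public:
    // Gets: missing keys come back empty (the "return null" case).
    std::optional<double> Get(std::string key) const {
        auto it = vars_.find(Fold(std::move(key)));
        if (it == vars_.end()) return std::nullopt;
        return it->second;
    }

    // Sets: insert if the key is new, overwrite if it already exists.
    void Set(std::string key, double value) {
        vars_[Fold(std::move(key))] = value;
    }

private:
    // All keys are folded to lowercase on the way in.
    static std::string Fold(std::string s) {
        std::transform(s.begin(), s.end(), s.begin(),
                       [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
        return s;
    }

    std::unordered_map<std::string, double> vars_;
};
```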

As far as APIs are concerned, we want to write this in such a way that it'll work regardless of what the class looks like, so we'll need to use reflection. Since we support methods, properties, and fields, and all three of them are handled slightly differently, we need to be able to accommodate all three. First, we search for the member using a combination of reflection and a simple LINQ expression:

Code: [Select]
var child = apiObject.GetType().GetMembers()
    .FirstOrDefault(e => string.Equals(e.Name, objectName, StringComparison.InvariantCultureIgnoreCase));
This gives us a case-insensitive match on the first member with that name. Since we're using an "OrDefault" query, we do a quick null check to make sure it exists, throwing an exception if it doesn't. We can then check the MemberType property of the returned value. Since we're only concerned with fields, methods, and properties, we search for those.

Fields and properties are simple. All we have to do is cast those to FieldInfo or PropertyInfo, respectively, and call GetValue(). Return that to the parser and there we go. Methods are a bit more complicated, especially if you want to take optional parameters and polymorphism into account. We don't really care about that right now, so all we do is take whatever parameters the expression parser picked up (basically, by setting up a little rule where values in parens following identifiers are treated as parameters), convert them into an object array, cast the MemberInfo class to MethodInfo, then return the results of the Invoke method.

There is one little problem, however. We want to be able to write API functions like a normal .NET function, without necessarily having to use the Value types for parameters and the return value. The problem with this is that .NET's default binding wasn't really intended to handle something like the Value type. Since Values can be converted seamlessly to just about any type, we need to write a custom binder to ensure that conversion will work smoothly.

We do this by creating a ValueBinder class, which extends the .NET Binder class. The only method that we care about overriding is the ChangeType() method. All we have to do here is check the requested type and explicitly point it to the appropriate Value conversion method. The following works nicely:

Code: [Select]
public override object ChangeType(object value, Type type, CultureInfo culture) {
    var val = value as Value;
    if(type == typeof(Value))  // No conversion necessary.
        return value;

    if(val is null) return null;

    return Type.GetTypeCode(type) switch {
        TypeCode.Boolean => val.ToBool(),
        TypeCode.Decimal => val.ToDecimal(),
        TypeCode.Double => val.ToDouble(),
        TypeCode.Int32 => val.ToInt32(),
        TypeCode.Int64 => val.ToInt64(),
        TypeCode.Single => val.ToSingle(),
        TypeCode.String => val.ToString(),
        _ => val.ToObject()
    };
}

Now, all we need to do is pass an instance of ValueBinder to the Invoke call using the default binding flags, and it'll work far more reliably with functions that use built-in types.

And, with that, everything is pretty much done. My test fixtures gave me a whole bunch of green checkmarks, so I decided to give it a quick stress test, if you will: the quadratic formula. Nothing too crazy, sure, but it's a pretty decent all-around test.

First of all, we'll need a square root function:

Code: [Select]
private class MathApi {
    public static Value sqrt(Value x) => Math.Sqrt(x);
}

Next, we write a quick test fixture:

Code: [Select]
[Test, Repeat(100)]
public void QuadraticFormula() {
    var vars = new Dictionary<string, Value>();
    var api = new MathApi();
    var ctx = new ApiContext(vars, api);

    var a = _rng.Next(1, 4);
    var b = _rng.Next(20, 40);
    var c = _rng.Next(1, 4);

    vars.Add("a", a);
    vars.Add("b", b);
    vars.Add("c", c);

    const string expr1 = "(-_b + sqrt(_b^2 - (4 * _a * _c))) / (2 * _a)";
    const string expr2 = "(-_b - sqrt(_b^2 - (4 * _a * _c))) / (2 * _a)";

    var expectedX1 = new Value((-b + Math.Sqrt(Math.Pow(b, 2) - (4 * a * c))) / (2 * a));
    var expectedX2 = new Value((-b - Math.Sqrt(Math.Pow(b, 2) - (4 * a * c))) / (2 * a));

    var actualX1 = new Parser(expr1).Eval(ctx);
    var actualX2 = new Parser(expr2).Eval(ctx);

    var message = $"a = {a}, b = {b}, c = {c}\n"
                + $"X1 => {expr1} = {actualX1} "
                + (expectedX1 == actualX1 ? "" : "(!!!)") + "\n"
                + $"X2 => {expr2} = {actualX2}"
                + (expectedX2 == actualX2 ? "" : "(!!!)") + "\n";

    Assert.AreEqual(expectedX1, actualX1, message);
    Assert.AreEqual(expectedX2, actualX2, message);
    Console.WriteLine(message);
}

And then we run it (output trimmed, since 100 iterations is a bit much):

Code: [Select]
a = 2, b = 37, c = 2
X1 => (-_b + sqrt(_b^2 - (4 * _a * _c))) / (2 * _a) = -0.05421292112523979
X2 => (-_b - sqrt(_b^2 - (4 * _a * _c))) / (2 * _a) = -18.44578707887476

a = 1, b = 24, c = 2
X1 => (-_b + sqrt(_b^2 - (4 * _a * _c))) / (2 * _a) = -0.0836247121870155
X2 => (-_b - sqrt(_b^2 - (4 * _a * _c))) / (2 * _a) = -23.916375287812983

a = 2, b = 33, c = 3
X1 => (-_b + sqrt(_b^2 - (4 * _a * _c))) / (2 * _a) = -0.09141556395963946
X2 => (-_b - sqrt(_b^2 - (4 * _a * _c))) / (2 * _a) = -16.40858443604036

Ding ding!

Now, for a quick bonus. Let's go ahead and work out the expression tree for one of those examples:

Code: [Select]
(-_b + sqrt(_b^2 - (4 * _a * _c))) / (2 * _a)

root: NodeBinary (Division)
    left: NodeParens(-_b + sqrt(_b^2 - (4 * _a * _c)))
        child: NodeBinary (Addition)
            left: NodeUnary (Negation)
                value: NodeLeaf (Identifier: _b)
            right: NodeLeaf (Identifier: sqrt)
                params: NodeParens (_b^2 - (4 * _a * _c))
                    child: NodeBinary (Subtraction)
                        left: NodeBinary (Power)
                            left: NodeLeaf (Identifier: _b)
                            right: NodeLeaf (Value: 2)
                        right: NodeParens (4 * _a * _c)
                            child: NodeBinary (Multiply)
                                left: NodeBinary (Multiply)
                                    left: NodeLeaf (Value: 4)
                                    right: NodeLeaf (Identifier: _a)
                                right: NodeLeaf (Identifier: _c)
    right: NodeParens(2 * _a)
        child: NodeBinary(Multiply)
            left: NodeLeaf (Value: 2)
            right: NodeLeaf (Identifier: _a)

And there you have it! Wasn't that fun? :D

3
Gaming / GameBoy Emulation
« on: August 09, 2020, 02:20:56 PM »
So after a whole hell of a lot of research and development, I managed to emulate enough of a GameBoy to properly run the DMG boot ROM (it actually gets far enough to swap out the boot ROM for the first 0x100 bytes of the game cart, then start executing code from the game ROM, but the emulation isn't complete enough to run anything useful right now):



I still have plenty of work to do, but I think I'm off to a decent start. :)

Edit: I ended up doing a ton of improvements on this. After I got it to this point the emulator was unable to run at full speed on a mobile i9. No, I'm not joking. I didn't think to get an exact figure on how long one "second" of emulation time took in real-time, and my frontend doesn't support frame skipping, so it was effectively running at half speed.

It's kind of amazing how a few microseconds quickly start to add up when you're literally simulating five million clock ticks per second (CPU runs at ~1.05MHz, PPU dot clock is ~4.19MHz). I believe that my CPU core is cycle-accurate, and I'm aiming for cycle-accuracy on the PPU as well (to a point, anyway; some of the specific timings involved with the video generation phase, PPU mode 3, are unclear).

While there were a few little micro-optimizations that probably did more for debug builds than release builds (flipping from range-based for loops to index-based, converting some if/else blocks to switch blocks, etc.), the most impactful changes occurred in the memory mapper.

Basically, Plip was designed to be more of an emulation interface than a single emulator. It doesn't split off the cores as separate libraries, à la RetroArch (though it probably could, honestly), but it's conceptually similar.

One of the things that I did was generalize memory access. It has a memory mapper, and that memory mapper takes PlipMemory objects (that being a pure virtual class, with RAM and ROM implementations). The core then assigns those memory blocks to specific addresses, and when you want to access memory you tell the mapper to fetch a byte and it'll handle all the hard work for you. For instance, if you have a ROM at 0x0000-0x3FFF, system RAM at 0x4000-0x5FFF, and video RAM at 0x6000-0x7FFF and you request a byte from 0x4800, it'll check the mapping table, find that the requested byte lives in the system RAM instance, then do some fancy math and return the byte at offset 0x0800 within system RAM.
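Here's a bare-bones sketch of that lookup, in case the description is too abstract. The names are invented and the real Plip mapper is fancier (it hands back the block and offset rather than the byte itself), but the find-and-rebase step is the same idea:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// A mapped block: where it starts in the address space, and its contents.
struct Block {
    uint32_t start;
    std::vector<uint8_t> data;
};

struct Mapper {
    std::vector<Block> blocks;  // assumed sorted and non-overlapping

    uint8_t Read(uint32_t addr) const {
        for (const auto& b : blocks) {
            if (addr >= b.start && addr < b.start + b.data.size())
                return b.data[addr - b.start];  // rebase into the block
        }
        return 0xFF;  // unmapped: open bus
    }
};

// ROM at 0x0000-0x3FFF, system RAM at 0x4000-0x5FFF; a read from 0x4800
// should land at offset 0x0800 within the RAM block.
uint8_t DemoRead() {
    Mapper m;
    m.blocks.push_back({0x0000, std::vector<uint8_t>(0x4000, 0)});
    std::vector<uint8_t> ram(0x2000, 0);
    ram[0x0800] = 42;
    m.blocks.push_back({0x4000, std::move(ram)});
    return m.Read(0x4800);
}
```

Mirroring something like ECHO RAM then just amounts to registering the same backing block at a second start address.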

This system is nice because it inherently supports banked ROM and RAM. All I have to do is update the offset of a block and I'm suddenly in a new bank. Additionally, this ended up being useful for a little GameBoy quirk known as ECHO RAM (due to how the DMG's memory controller works, 0xC000-0xDDFF is mirrored to 0xE000-0xFDFF). All I had to do to simulate that was simply add the work RAM block to the upper address and it Just Worked™.

Now, there is a pretty substantial problem with this that I hinted at earlier: that find routine costs CPU cycles, and running it unnecessarily adds up fast. Obviously, the CPU needs to use the mapper for all of its memory access, since it doesn't really know any details about the memory layout. That isn't a huge problem, seeing as the GB's CPU cannot read or write memory more than once per cycle. The problem lies in the PPU. Not only is the dot clock four times faster than the CPU, but the PPU also has a bunch of registers that it needs to both read and update in order to display the image. This results in the find function being called millions of times per second.

The fix for this was simple: since the PPU is only reading and writing certain specified registers, I can easily get away with directly accessing the PlipMemory objects declared in the core (m_videoRam, m_oam, and m_ioRegisters, in my case). I just handled all of the arithmetic to get the appropriate addresses in the various "static const" declarations. Easy peasy.

Even with that, it still wasn't fast enough, and there's a good reason why: because I needed a data structure that allows quick and easy inserts, I used std::list, the STL's doubly-linked list implementation. Linked lists are cheap to insert into, but since they are disparate objects tied together with pointers, the compiler can't simply say "oh, it's just X address, plus the index times the size." It has to follow the trail of pointers, which makes iteration significantly slower, since the data can't be easily cached and its location can't be predicted. Since the memory mapper supports all sorts of fancy features, like being able to smash a block of memory on top of another one, I didn't want to abandon std::list altogether; it made everything so clean and easy. Fortunately, assigning blocks of memory happens relatively rarely, so I changed it to do block assignments against an std::list, then build a far more efficient std::vector (which is basically a managed contiguous array) from the final contents of the list. Basically, trading a minuscule amount of CPU time and memory to save a ton of cycles in the long run.
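That arrangement boils down to something like this (hypothetical names, not the actual Plip code): keep the convenient std::list for the rare block-assignment edits, and rebuild a contiguous std::vector for the hot path to iterate over.

```cpp
#include <cassert>
#include <cstdint>
#include <list>
#include <vector>

// Stand-in for a block-assignment record.
struct Range { uint32_t start, length; };

class MappingTable {
public:
    // Edits are rare, so paying for a full rebuild here is cheap overall.
    void Assign(Range r) {
        edits_.push_back(r);
        Rebuild();
    }

    // The read/write path iterates this contiguous, cache-friendly copy.
    const std::vector<Range>& Lookup() const { return lookup_; }

private:
    void Rebuild() { lookup_.assign(edits_.begin(), edits_.end()); }

    std::list<Range> edits_;     // easy inserts and reshuffles
    std::vector<Range> lookup_;  // flattened snapshot of the list
};
```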

And finally, there's the matter of the return type of FindAddress. I was using std::tuple<PlipMemory*, uint32_t> (the pointer to the memory object and the memory address offset relative to that block). I replaced that with a struct, which reduced the overhead of that function quite a bit. I think std::pair<> would have been a safe bet as well; I might test that at some point. From what I understand the difference becomes moot on optimized builds, but this is one of those situations where micro-optimizations are actually useful in order to make the debugging process less awful.
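For what it's worth, the tuple-to-struct swap looks roughly like this (field names are invented for illustration, not Plip's actual declaration):

```cpp
#include <cassert>
#include <cstdint>
#include <tuple>

struct PlipMemory;  // opaque for this sketch

// Before: std::tuple<PlipMemory*, uint32_t>, where you have to remember
// that get<1>() is the offset.
using AddressTupleResult = std::tuple<PlipMemory*, uint32_t>;

// After: the same two values, but with self-documenting names.
struct AddressInfo {
    PlipMemory* memory;  // block the address falls into (nullptr if unmapped)
    uint32_t offset;     // address relative to the start of that block
};

AddressInfo Translate(uint32_t addr, uint32_t blockStart, PlipMemory* block) {
    return {block, addr - blockStart};
}
```

Besides being clearer, the named fields are also nicer to poke at in a debugger than tuple element accessors.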

I'm having a ton of fun with this project, in case it wasn't obvious. ;P

4
Random Chat / The Thread of Excessive Rage
« on: July 18, 2020, 01:06:12 AM »
So I realize that I made somewhat of a tactical blunder when I locked the existing happy/anger threads. I figured that it would encourage more long-form discussion through the creation of new threads, but by and large all it's done is remove a means of minor celebration or venting. While there are some things that have worldwide relevance (hello there, COVID-19) and are more than deserving of their own thread, things like my FedEx escapades really don't need more than a couple of posts to get the point across.

This is where you people rant about your fleeting moments of rage. It's sort of like the old thread, except…FUCK, there's nothing different at all about it, is there? Dammit!

5
Random Chat / The Thread of Extreme Happiness
« on: July 18, 2020, 01:04:24 AM »
So I realize that I made somewhat of a tactical blunder when I locked the existing happy/anger threads. I figured that it would encourage more long-form discussion through the creation of new threads, but by and large all it's done is remove a means of minor celebration or venting. While there are some things that have worldwide relevance (hello there, COVID-19) and are more than deserving of their own thread, things like my FedEx escapades really don't need more than a couple of posts to get the point across.

Here, I wish to present you lovely folks with a thread to post your happy thoughts, just like what we had before except…okay, there's nothing different about this one whatsoever. Enjoy!

6
News / "Contact Information" Thread Removed
« on: July 15, 2020, 06:01:35 PM »
I ended up getting an information deletion request for the Contact Information thread today and ended up removing the whole thing outright. Much of the information in that thread was from inactive members, and much of it was private enough that I wasn't comfortable hosting it (like phone numbers, not to mention deadnames). I sent requests to have the information purged from Google and Bing's cache as well.

I highly recommend using the forum profile fields instead. I've added fields for Nintendo Switch friend codes and Discord IDs, so that should basically bring us up to 2020. :) Let me know if there's anything we're missing and I'll add the appropriate fields.

Edit: I have just confirmed that Google and Bing have removed the cached pages.

7
Computing / Your Lair
« on: June 30, 2020, 09:29:17 AM »
Post pictures of your computing/gaming space here!

After using a recliner/TV setup for a while I decided to switch back to a desk, partially because I was getting sick and tired of my TVs not playing well with my PCs and mostly because I wanted an adjustable desk. So…I ended up with this (click for full size):



The desk is a VertDesk v3 with all sorts of fun little addons (programmable switch, monitor arm, cable management box and drag chain) and the anti-fatigue mat is an Imprint Commercial Couture Strata. I also ordered a pair of Acer Predator XB271HU monitors. I had the sound system for a while, but if you're curious it's a Sony STR-DH740 receiver, a pair of Infinity Primus P163 speakers, and (out of frame) an Infinity PS312 subwoofer.

I also have a CalDigit USB-C laptop dock (not pictured) that I use for both my personal and work laptops, as well as a switchable USB hub that I can use to easily shove my keyboard and mouse from my tower to the laptop dock.

All in all, it's been working quite well. Aside from the added general comfort of this setup, 144hz gaming has been downright incredible so far. I was pleasantly surprised that Doom 2016 was able to run at a locked 144hz on 1440p Ultra on my system, so naturally I'm kind of curious to see how Doom Eternal is going to fare (it was able to run at 4K60, maxed out, so I'm hoping for a consistent ~90-100fps at 1440p).

Edit: Tried it out tonight and it's more like 120-140fps on average, on Ultra Nightmare settings. I'll take it!

8
Random Chat / Why FedEx is the Worst Delivery Service
« on: June 19, 2020, 09:49:54 AM »
I have a package arriving via FedEx, and it's already turning into a frustrating experience, so I figured I'd vent a bit about why I feel that FedEx is the absolute worst delivery service that I've ever had the displeasure of using.

So, in this particular incident, I have a package coming to me from northern Wisconsin, roughly 10 hours away. Six parts, with an estimated delivery date of Saturday, June 20 for four parts and Sunday, June 21 for two. Kind of weird, but considering the total weight is 200 lbs (90 kg) I can understand. The package in question is a desk, so I explicitly set aside time this weekend to assemble it. Perfect, I'll be able to have it together in time to start work on Monday.

I wake up to an update this morning. The packages are now in Illinois, but the estimated delivery date for two of the parts due on Saturday has been pushed back to Monday, a day when my schedule is jam-packed. All of the other parts remain unchanged, so it's only those two parts that got delayed. Okay, maybe it's a non-critical part and I can still assemble most of the desk. Let's just pull up the tracking information and check the weight of those two parts.

Oh, it's the two heaviest parts. In other words, the base and top. Fuck.

Edit: Just got an update. Now everything is supposed to get here on Monday. Here's an idea, FedEx: stop fucking lying to make yourselves look better and start giving proper estimates.

"Oh, but COVID-19! Oh, unexpected circumstances! Oh, but blah blah blah." No, fuck that. FedEx has a very long history of doing this kind of shit. I've gotten many, many packages delivered to me over the years (via USPS, UPS, FedEx, DHL, and Amazon), and it seems like every time I get something via FedEx they end up surprising me with just how awful they are. Let's get into a few examples.

First of all, they never estimate high and exceed expectations. They will "estimate" delivery dates that are beyond their abilities and then repeatedly push them back to keep them "accurate." There was one time that I ordered a package shipped via 2 day shipping and the delivery date was pushed back 5 times. The only time UPS has ever been late for a delivery was due to a goddamn winter storm.

Don't expect them to deliver early in the rare event that they do overestimate, though. It's really rare that they will. I ordered a package from an adjacent state (I think Indiana) that for some reason had an estimated 5 day delivery window. The package made it to Ohio within a day, and proceeded to sit in a warehouse, 45 minutes from my house, right up until the "estimated" delivery time. To add insult to injury, the package tracking said something like "package not due" as the current status.

At one point, they pulled the stereotypical cable guy ploy. When I ordered my Dell in 2016, somehow, despite three people being home at the time, we found that no delivery attempt was made. We checked our security camera and found that the delivery person simply walked up to the front door with a "while you were out" note, put it on the door, then left. Nice.

I've had a couple instances where two boxes that left from the same facility at the same time ended up taking two wildly different routes. The most notable one involved a package from Virginia, where one box went straight from point A to point B while the other ended up in Chicago (what?!) and had an additional two days tacked onto its transit time.

My dad ordered something from California once via FedEx. A couple of days later, I ordered something from Prague, Czechia via DHL. The international package arrived in three days, including time spent in customs. The package from FedEx took eight business days. It's worth noting that USPS can get packages from California to Ohio in 3-4 days via ground, tops. I've had packages from Taiwan and China (which do tend to get held up in customs) arrive faster than that.

My company once relied on FedEx's same-day delivery to replace a failing hard drive in our core file server. It took three days for the package to arrive.

The worst part is that I didn't have to rack my brain for any of this. I use FedEx as little as humanly possible yet I was able to come up with these off the top of my head. If you asked me what sort of negative experiences I had with other delivery services I'd have to actually sit down and think about it, but issues with FedEx just roll right off the tongue. Fucking terrible company, and I wish they weren't too big to fail.

9
Computing / The Keeb Thread
« on: June 14, 2020, 06:23:29 PM »
Post your keyboard here!

Here's the board I'm currently using for my main PC (click to enlarge, as always):



It's a GMMK TKL with 67g Purple Zealio V2 switches and Drop + Matt3o /dev/tty keycaps.

I love almost everything about this thing. The keycaps have a spherical cut rather than a cylindrical one, so they almost cradle your fingertips while you type. F and J use a deeper groove instead of a nub or other marker, so it's just as easy to tell if your fingers are in the home position. Beyond that, they're thick PBT caps, so the texture is going to last a long time (my vintage 1989 Model M's keycaps still have their texture!) and they're noticeably quieter than the stock Pudding caps that the GMMK comes with.

The Zealios have a nice and heavy tactile bump, somewhat like a crisper MX Clear, with the actuation point immediately after the bump. This is a pretty big improvement over the Kailh Box Browns I ordered with the keyboard, which have an actuation point that's roughly 0.5mm below the bump.

I have a bunch of Holy Pandas on order as well, so we'll see how they compare. I often see the Zealio V2 compared with those switches, so I'm eager to compare them side by side.

As for the "almost" bit, the main points of criticism that I have is that everything about the way that the GMMK's company markets everything is beyond cringe. For one, the keyboard is called the "Glorious Modular Mechanical Keyboard," and the company's entire image is based on Yahtzee's "PC master race" joke. Yes. There's seriously a company whose entire schtick is based on r/pcmasterrace. I hate it.

The second downside is that the software is Windows-only and kinda crap. Their macro support is as basic as it gets: you can only redefine keys, with no option to use Fn+X chords to trigger macros (uhhh, why not just use AutoHotKey at that point?). I can't imagine their compact keyboards even being usable, seeing as they don't even offer customizable layers. Naturally, their controller doesn't support QMK or other open source firmware, so you're stuck with what you get.

That said, the build quality is nice, and the Kailh hotswap sockets have proven to be very nice indeed.

10
Song of the Day / [D&B] Raizer - Explode
« on: May 27, 2020, 08:57:01 AM »
A few years ago I posted about a band that I randomly stumbled upon called Raizer. Last week I randomly checked in on their artist page on Apple Music and noticed that they'd released a bunch of singles after their debut album dropped that I hadn't listened to yet.

They're all pretty great, but this one stuck out.



Fast, aggressive, with a really awesome chorus tying everything together. This song's been on my regular rotation since I first heard it.

11
Gaming / The Outer Worlds
« on: October 28, 2019, 02:28:38 AM »
I heard a lot of people describe this as "Fallout 4 in Space," mostly because Obsidian did New Vegas, yadda yadda.

See, that's not true at all. Unlike Fallout 4, this game is actually good.

*troll face*

But yeah, I'm about five hours in right now and just got off the first planet. The game definitely does resemble titles from Bethesda's stable, but instead of being based on Bethesda's mutilated version of the Gamebryo engine it's based on Unreal Engine 4. Rock solid so far and plays well on my 1080 Ti (ultra at 4K, though I did bump the render scale down to 80% to keep the framerate locked at 60; could probably tweak it further to squeeze some more pixels out of it but I don't see the need to). I had one minor issue where the sounds would cut off prematurely and eventually stop playing altogether around the 2h30m mark, but a quick restart (and I do mean quick—this game runs really well) resolved the issue. There is some minor object pop-in, but how noticeable it ends up being seems to be pretty variable.

My gut feeling is telling me that you're not going to see the same level of expansiveness that you do in Bethesda titles after seeing how the first area is handled, but I'd much rather have a handful of rich, densely packed 750m² (rough guess based on distances walked) areas than a relatively sparse 16mi² one.

But anyway, it's getting late so I'm just going to finish this off real quick. First impressions are very positive. If it turns into a complete dumpster fire another hour in I'm sure I'll bitch about it, but so far I've been having a ton of fun with it.

12
Random Chat / New car!
« on: October 21, 2019, 09:46:38 PM »
So I ended up getting my first brand spanking new, straight off the showroom floor, 14 miles on the clock car tonight.


2019 Honda Civic EX Sedan. Has a few features! I like it!

13
Gaming / Super Stigma Slam - 2019
« on: October 01, 2019, 09:59:22 PM »
I think I mentioned this elsewhere, but this weekend I'm going to be taking part in a gaming marathon benefiting Take This, a charity focused on educating the public about mental health issues and reducing the stigma of mental illness.

Here's a linky to the specific marathon I'm taking part in: https://www.takethis.org/

I'm going to be doing an easy run of Quake, beating Ganon in Hyrule Warriors, and playing Doom II on UV. Here's a rundown of when I'm going to be up:

Friday, October 4, 11:00PM CDT: Quake
Saturday, October 5, 1:15PM CDT: Hyrule Warriors
Saturday, October 5, 10:15PM CDT: Doom

Here's a link to the full schedule: https://horaro.org/super-stigma-slam-2019/main (the times there should automatically adjust to match your time zone).

Please tune in if you have a chance. ;D

Edit: Should have probably added a link to the web site (which has the Twitch streamy thingie embedded). Here ya go: https://superstigmaslam.com/

14
Gaming / wut specturr'z playing
« on: January 24, 2019, 10:28:20 AM »
Kind of a personal spinoff to vlad's gaming threads.

Current 2019 goals:

1. Focus on Doom tricks. I've recently managed to pull off a series of both guided and guideless glides while playing casually (MAKE SENSE OF THAT), so I might as well keep at it and start lowering my time. I'm not aiming for WR, but some nice PBs would be pretty fantastic.

2. Hyrule Warriors grinding. I'd like to get as many characters to at least level 100 as possible. Link is already there, with his closest runners (Zelda, Skull Kid, Impa, Linkle) sitting in the mid-70s to low-80s. I'd also like to clear the first four adventure maps (the first one is already done, so that leaves the Great Sea, as well as the Master Quest versions of the classic and Great Sea maps).

3. Horizon: Zero Dawn. I bought the definitive edition cheap when I picked up my PS4 Pro but I have yet to play it. That needs to change.

4. (added 2019-02-13) Rebel Galaxy. Had a ton of fun with this a couple years ago and just never picked it up again. Whoops.

15
News / Admin approval is now required for all account registrations
« on: January 07, 2019, 10:05:08 AM »
I honestly never thought the spam situation would get bad enough for things to come to this, but I've had it with this bullshit.

It doesn't help that SMF's development is pretty much glacial at this point, to say nothing of the plugin ecosystem, so trying to combat this by tweaking the settings or installing an addon has been pretty much futile. The methods that I used to use no longer install cleanly into 2.0.15, and I was pretty much over manually installing mods all the way back in the arch0wl.com days, so this is about the best "compromise" I could come up with.

All in all, I suspect it's going to be way easier to mass-delete inactive accounts every few days than deal with "hot fortnite xxx cock pussy creampie viagra" threads popping up daily, so that's the route I've decided to take. ¯\_(ツ)_/¯
