> the implementation is a full 32-bit multiply using 16-bit limbs
>
> (note: the LCG constants listed here are wrong)
It is actually a 24-bit multiplier (a hint is that there are 3 multiplications), and it computes:
`word[RndVar]*word[RndA] + 65536*word[RndVar+2]*word[RndA] + 65536*word[RndA]*word[RndVar]`
The third multiplication is identical to the first. This is a bug: word[RndA] should actually be word[RndA+2]. This is why the effective multiplier is a=0xfd43fd and not 0x343fd as documented.
There is a similar bug in the addition of c: `... + word[RndC] + 65536*byte[RndC]`, where byte[RndC] should be byte[RndC+2]. Hence c=0xc39ec3 instead of 0x269ec3.
I checked the original code, and RandA and RandC are indeed defined as 32-bit variables initialised with the documented constants.
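To double-check the effective constants, here is a sketch that reproduces the limb arithmetic with both bugs in place and compares it against a plain 24-bit LCG (the limb splitting follows my reading of the formula above; the variable and function names are mine):

```python
MASK24 = 0xFFFFFF
A_DOC = 0x343FD   # documented multiplier
C_DOC = 0x269EC3  # documented increment

def buggy_step(state):
    """One 24-bit LCG step with both bugs reproduced."""
    v_lo = state & 0xFFFF          # word[RndVar]
    v_hi = (state >> 16) & 0xFF    # word[RndVar+2]
    a_lo = A_DOC & 0xFFFF          # word[RndA]
    # third product should use word[RndA+2] (0x03) but reuses word[RndA]
    prod = v_lo * a_lo + 65536 * v_hi * a_lo + 65536 * v_lo * a_lo
    # increment should add 65536*byte[RndC+2] (0x26) but adds 65536*byte[RndC] (0xC3)
    return (prod + (C_DOC & 0xFFFF) + 65536 * (C_DOC & 0xFF)) & MASK24

def effective_step(state):
    """Equivalent plain LCG with the effective constants."""
    return (0xFD43FD * state + 0xC39EC3) & MASK24

# the two agree on a sample of 24-bit states
assert all(buggy_step(s) == effective_step(s) for s in range(0, 1 << 24, 9973))
```

Mod 2^24, the erroneous `65536*word[RndA]*word[RndVar]` term can only contribute the low byte of word[RndA] (0xfd), which is exactly where the 0xfd in 0xfd43fd comes from.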
That said, even the fixed version produces a similarly bad randogram, and a quick check seems to confirm that discarding the lowest-order byte produces better results.
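The low byte being poor is expected with a power-of-two modulus: the low k bits of any such LCG themselves form an LCG mod 2^k, so the lowest byte cycles with period at most 256 regardless of which constants are used. A quick sketch (effective constants from above; seed chosen arbitrarily):

```python
def lcg24(state, a=0xFD43FD, c=0xC39EC3):
    return (a * state + c) & 0xFFFFFF

state = 0x1234          # arbitrary seed
low_bytes = []
for _ in range(512):
    state = lcg24(state)
    low_bytes.append(state & 0xFF)

# the low byte repeats after exactly 256 steps...
assert low_bytes[:256] == low_bytes[256:]
# ...visiting each byte value exactly once per cycle, a very regular pattern
assert sorted(set(low_bytes)) == list(range(256))
```

This is why dropping the lowest-order byte improves the output no matter whether the buggy or the documented constants are in play.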