Salute,
And please stop saying natural languages are shit, because they simply
are not.
You're not special, and the words you use are not magic, but very
specific things that may or may not refer to a set of other very
specific things, sensical or not those may be.
Language models fall short of voodoo.
You can think of one as an encoding, effectively: a computer program
that acts on information by encoding it into machine-understandable form
and decoding it back, using this little thing called an _embedding_ (a
simple mathematical object) to bring maths to the table, yeah?
'king' - 'man' + 'woman' ≈ 'queen'
Imagine if you could do maths on words. Well, it turns out you can; this
is what machine learning in this field is all about. How logos fits into
all of this is interesting, but involved.
In fact, you can do a lot of algebraic damage to these embeddings, to
the point it's almost unethical how much you get to exploit such simple
mathematical objects to your own personal advantage.
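The famous analogy above can be toy-sketched in a few lines. The vectors
here are made up for illustration; real embeddings (word2vec, GloVe, and
friends) have hundreds of dimensions learned from text, but the
arithmetic is the same:

```python
import numpy as np

# Toy embeddings: made-up 3-dimensional vectors, for illustration only.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
    "apple": np.array([0.1, 0.9, 0.3]),
}

def cosine(a, b):
    # standard cosine similarity between two vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# 'king' - 'man' + 'woman' lands nearest to 'queen'.
target = vectors["king"] - vectors["man"] + vectors["woman"]
nearest = max((w for w in vectors if w not in ("king", "man", "woman")),
              key=lambda w: cosine(vectors[w], target))
print(nearest)  # → queen
```

That's the whole trick: once words live in a vector space, "nearest
neighbour" and "plus/minus" suddenly mean something.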
You should do yourself a favour and go read one of the many "Terms and
conditions" sheets; soon enough you will see, and experience for
yourself, the marasmatic verbosity of the language used in
jurisprudence: the lawyer talk.
They truly believe they're special. But nothing they do is special; it's
part analysis, part good old bookkeeping. All jokes aside, lawyers ought
to worry, if not too much then at least hard enough, so they can beat it
with analysis and truly outplay the computers at the very
rule-following game for which they were designed.
My opinion: You must be dumb to think people can excel here.
Where we can excel, however, is in operational capacity. We people have
broad understanding of the real world, and even though in some ways we
are all very similar, the ways in which we are different truly matter.
Eyes on the prize.
Whoever models discourse on modern computer infrastructure takes it.
-badt
Interesting! Could you please explain to us featherbrained plebeians,
lacking this arcane gnosis that is computer programming, how you see
logos going about non-declarative propositions and some of the more
subtle and troublesome domains of natural languages concerning
semantics and pragmatics? I'm positive I can't be the only one who
wants to hear about how logos ties in with all this talk of NLP and
computational linguistics!
Hey,
The computer programming bit is totally uninteresting.
As far as natural language processing goes, it has much more to do with
maths, and linear algebra in particular. Most computer programmers have
no idea this is possible, or to what extent it will become possible in
the near future, despite literally working in the industry for many
years!
That's where I find a striking similarity between the French bourgeoisie
and the programming community, the programming language design community
in particular:
they both ultimately lack self-awareness.
In the West, the 'right of blood' idea is slowly but steadily dying off,
and although it may remain a pressing question in most if not all Asian,
African, and Arabic countries, in the Western scheme of things it is
slowly beginning to mean shit.
And that's the very scheme in which I operate.
Of course they're going to say that natural languages cannot be used to
create, run, and repair computer systems; the alternative is simply too
damn daunting.
Try and imagine not having the exclusive right to computation.
A world in which plebs have some exquisite understanding of computers,
imagine that; a world where plebs can create and repair their own sites.
This is what truly scares programmers to the tits!
>how you see logos going about non-declarative propositions
For a proposition to be meaningful, it has to declare something about
the world; this is the fundamental property of propositions. So you see,
in my view all meaningful propositions, or at the very least the useful
ones, are in a way declarative.
Now, I'm aware there are papers such as [1] in which the declarative is
opposed to the assertoric, differentiating between questions,
assertions, and commands.
You can imagine logos to be 100% declarative in form; each and every
assertion uttered in the realm of logos is only asserted insofar as the
identity of the speaker is considered.
Everyone speaks for themselves.
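A minimal sketch of that idea, with all names and structure made up by
me (nothing here reflects the actual logos internals): every utterance
is stored as a declarative record tagged with its speaker, so "everyone
speaks for themselves" becomes a property of the data model itself.

```python
from dataclasses import dataclass

# Hypothetical sketch; field names are my own invention, not logos.
@dataclass(frozen=True)
class Assertion:
    speaker: str      # every assertion carries the identity of who made it
    proposition: str  # declarative in form, whatever it expresses

# Even a question gets recorded declaratively: "anon asserts that he
# asked X" rather than a bare interrogative floating free of a speaker.
log = [
    Assertion("badt", "natural languages can drive computer systems"),
    Assertion("anon", "asked how logos handles non-declarative propositions"),
]

# Nothing is asserted simpliciter; every proposition is indexed by speaker.
by_speaker = {}
for a in log:
    by_speaker.setdefault(a.speaker, []).append(a.proposition)
print(sorted(by_speaker))  # → ['anon', 'badt']
```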
>concerning semantics and pragmatics?
Please expand.
Semantics mean shit so long as the computer is able to correctly derive
the meaning of what's being said. That's part of the reason why GPT-3, a
language model of very limited semantics, is able to consistently
produce human-readable and human-reasonable text. The trick is to feed
it enough data so it can first derive the principles of rule-following
inherent to that data (GPT is trained on billions of texts available
online) and then remember how the rules are meant to be applied.
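The shape of that trick — derive the rules from data, then apply them —
can be toy-sketched with a bigram model. This has nothing to do with
GPT's actual architecture; it only shows the "learn from a corpus,
then predict" pattern at its absolute smallest:

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for "billions of texts available online".
corpus = "the cat sat on the mat the cat ate the rat".split()

# Step 1: derive the "rules" — which word tends to follow which.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

# Step 2: apply the rules — predict the most likely next word.
def most_likely_next(word):
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # → cat
```

The model has no idea what a cat is; it has only counted what tends to
follow what, and that already buys it plausible-looking continuations.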
There's nothing pragmatic about that, it's optimistic at best.
You can't be so naive as to expect a free-wheeling, black-box circus of
algebraic voodoo to produce 100% consistent and meaningful propositions
all the time, right? Error is inevitable; there's just too much
inconsistent nonsense in the training set.
The good news is you can account for nonsense, and in fact that's
precisely the approach I took with logos. Instead of trying die-hard to
eliminate everything nonsensical, you simply embrace it, and use the
redundancy it brings to your own advantage.
Anything redundant can be encoded efficiently.
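That last line is plain information theory, nothing logos-specific: a
compressor is exactly a machine for exploiting redundancy, so repetitive
input costs almost nothing to encode while noise costs full price.

```python
import os
import zlib

# 9000 bytes of pure repetition vs 9000 bytes of (almost) no redundancy.
redundant = b"nonsense " * 1000
noise = os.urandom(9000)

# The repetition encodes down to a handful of bytes;
# the noise stays roughly its original size, with a little overhead.
print(len(zlib.compress(redundant)))
print(len(zlib.compress(noise)))
```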
-badt
[1] https://periodicos.ufsc.br/index.php/principia/article/view/14704