~seirdy/public-inbox

floss-security: fix typo v1 PROPOSED

Pranjal Kole: 1
 floss-security: fix typo

 2 files changed, 4 insertions(+), 4 deletions(-)

Copy & paste the following snippet into your terminal to import this patchset into git:

curl -s https://lists.sr.ht/~seirdy/public-inbox/patches/29240/mbox | git am -3
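The -3 option tells git am to fall back to a three-way merge if the patch doesn't apply cleanly; run the snippet from inside a clone of the repository the patch targets.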

[PATCH] floss-security: fix typo

"ME not without" -> "ME is not without"

Also added a missing trailing dot.
---
 content/posts/floss-security.gmi | 4 ++--
 content/posts/floss-security.md  | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/content/posts/floss-security.gmi b/content/posts/floss-security.gmi
index 8113397..255cd70 100644
--- a/content/posts/floss-security.gmi
+++ b/content/posts/floss-security.gmi
@@ -143,7 +143,7 @@ Understanding the inner workings of the obfuscated components blurs the line bet

Skochinsky's and Corna's analysis was sufficient to clarify (but not completely contradict) sensationalism claiming that ME can remotely lock any PC (it was a former opt-in feature), can spy on anything the user does (they clarified that access is limited to unblocked parts of the host memory and the integrated GPU, but doesn't include e.g. the framebuffer), etc.

While claims such as "ME is a black box that can do anything" are misleading, ME not without its share of vulnerabilities. My favorite look at its issues is a presentation by Mark Ermolov and Maxim Goryachy at Black Hat Europe 2017:
While claims such as "ME is a black box that can do anything" are misleading, ME is not without its share of vulnerabilities. My favorite look at its issues is a presentation by Mark Ermolov and Maxim Goryachy at Black Hat Europe 2017:

=> https://papers.put.as/papers/firmware/2017/eu-17-Goryachy-How-To-Hack-A-Turned-Off-Computer-Or-Running-Unsigned-Code-In-Intel-Management-Engine.pdf How to Hack a Turned-Off Computer, or Running Unsigned Code in Intel Management Engine.

@@ -200,7 +200,7 @@ I readily concede to several points in favor of source availability from a secur
* Closed-source software may or may not have builds available that include sanitizers and debug symbols.
* Although fuzzing release binaries is possible, fuzzing is much easier to do when source code is available. Vendors of proprietary software seldom release special fuzz-friendly builds, and filtering out false-positives can be quite tedious without understanding high-level design.
* It is certainly possible to notice a vulnerability in source code. Excluding low-hanging fruit typically caught by static code analysis and peer review, it’s not the main way most vulnerabilities are found nowadays (thanks to X_Cli for reminding me about what source analysis does accomplish).
-* Software as a Service can be incredibly difficult to analyze, as we typically have little more than the ability to query a server. Servers don't send core dumps, server-side binaries, or trace logs for analysis. Furthermore, it's difficult to verify which software a server is running.¹⁴ For services that require trusting a server, access to the server-side software is important from both a security and a user-freedom perspective
+* Software as a Service can be incredibly difficult to analyze, as we typically have little more than the ability to query a server. Servers don't send core dumps, server-side binaries, or trace logs for analysis. Furthermore, it's difficult to verify which software a server is running.¹⁴ For services that require trusting a server, access to the server-side software is important from both a security and a user-freedom perspective.

Most of this post is written with the assumption that binaries are inspectable and traceable. Binary obfuscation and some forms of content protection/DRM violate this assumption and actually do make analysis more difficult.

diff --git a/content/posts/floss-security.md b/content/posts/floss-security.md
index ca6cd96..4829bb5 100644
--- a/content/posts/floss-security.md
+++ b/content/posts/floss-security.md
@@ -122,7 +122,7 @@ Unfortunately, some components are poorly understood due to being obfuscated usi

Skochinsky's and Corna's analysis was sufficient to clarify (but not completely contradict) sensationalism claiming that ME can remotely lock any PC (it was a former opt-in feature), can spy on anything the user does (they clarified that access is limited to unblocked parts of the host memory and the integrated GPU, but doesn't include e.g. the framebuffer), etc.

While claims such as "ME is a black box that can do anything" are misleading, ME not without its share of vulnerabilities. My favorite look at its issues is a presentation by Mark Ermolov and Maxim Goryachy at Black Hat Europe 2017: [How to Hack a Turned-Off Computer, or Running Unsigned Code in Intel Management Engine](https://papers.put.as/papers/firmware/2017/eu-17-Goryachy-How-To-Hack-A-Turned-Off-Computer-Or-Running-Unsigned-Code-In-Intel-Management-Engine.pdf).
While claims such as "ME is a black box that can do anything" are misleading, ME is not without its share of vulnerabilities. My favorite look at its issues is a presentation by Mark Ermolov and Maxim Goryachy at Black Hat Europe 2017: [How to Hack a Turned-Off Computer, or Running Unsigned Code in Intel Management Engine](https://papers.put.as/papers/firmware/2017/eu-17-Goryachy-How-To-Hack-A-Turned-Off-Computer-Or-Running-Unsigned-Code-In-Intel-Management-Engine.pdf).

In short: ME being proprietary doesn't mean that we can't find out how (in)secure it is. Binary analysis when paired with runtime inspection can give us a good understanding of what trade-offs we make by using it. While ME has a history of serious vulnerabilities, they're nowhere near what [borderline conspiracy theories](https://web.archive.org/web/20210302072839/themerkle.com/what-is-the-intel-management-engine-backdoor/) claim.[^11]

@@ -166,7 +166,7 @@ I readily concede to several points in favor of source availability from a secur
- Closed-source software may or may not have builds available that include sanitizers and debug symbols.
- Although fuzzing release binaries is possible, fuzzing is much easier to do when source code is available. Vendors of proprietary software seldom release special fuzz-friendly builds, and filtering out false-positives can be quite tedious without understanding high-level design.
- It is certainly possible to notice a vulnerability in source code. Excluding low-hanging fruit typically caught by static code analysis and peer review, it's not the main way most vulnerabilities are found nowadays (thanks to <span class="h-card vcard"><a class="p-name url n" href="https://www.broken-by-design.fr/"><span class="p-nickname nickname">X_Cli</span></a></span> for [reminding me about what source analysis does accomplish](https://lemmy.ml/post/167321/comment/117774)).
-- Software as a Service can be incredibly difficult to analyze, as we typically have little more than the ability to query a server. Servers don't send core dumps, server-side binaries, or trace logs for analysis. Furthermore, it's difficult to verify which software a server is running.[^14] For services that require trusting a server, access to the server-side software is important from both a security and a user-freedom perspective
+- Software as a Service can be incredibly difficult to analyze, as we typically have little more than the ability to query a server. Servers don't send core dumps, server-side binaries, or trace logs for analysis. Furthermore, it's difficult to verify which software a server is running.[^14] For services that require trusting a server, access to the server-side software is important from both a security and a user-freedom perspective.

Most of this post is written with the assumption that binaries are inspectable and traceable. Binary obfuscation and some forms of content protection/<abbr title="Digital Rights Management">DRM</abbr> violate this assumption and actually do make analysis more difficult.

-- 
2.35.1