~jackmordaunt

Australia

https://jackmordaunt.srht.site

Software developer who enjoys clean, well-structured code.

~jackmordaunt/public-inbox

Last active 21 days ago

~jackmordaunt/gio-planet

Last active 4 months ago

~jackmordaunt/kanban-ci

Last active 11 months ago

~jackmordaunt/audiotube-devel

Last active 1 year, 1 month ago

~jackmordaunt/audiotube-announce

Last active 1 year, 1 month ago

Recent activity

[PATCH 2/2] list: [fix] copy state update slice to avoid data race 8 days ago

From Jack Mordaunt to ~gioverse/chat

I observed a data race between layout and the process goroutine that
appears to be a result of sharing the element slice.

While layout was querying for which serials are in the viewport, the
async process was updating those same elements during a modify
request.

After reviewing the code, I concluded that the slice was being shared
and should instead be copied before reaching layout - that is, before
being sent over the updates channel.

After applying this change, I could not reproduce the race condition.
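
The pattern, sketched with illustrative names (`element`, `stateUpdate`,
and `publish` are stand-ins, not the chat package's actual identifiers):
copy the slice into a fresh backing array before the send, so the
receiver and the async process never share memory.

```
package sketch

// element and stateUpdate are illustrative stand-ins for the list
// package's types; the real definitions live in ~gioverse/chat.
type element struct{ Serial string }

type stateUpdate struct {
	Elements []element
}

// publish snapshots the elements before sending them over the
// updates channel, so the async process can keep mutating its own
// slice without racing against reads during layout.
func publish(updates chan<- stateUpdate, elements []element) {
	snapshot := make([]element, len(elements))
	copy(snapshot, elements)
	updates <- stateUpdate{Elements: snapshot}
}
```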

Signed-off-by: Jack Mordaunt <jackmordaunt.dev@gmail.com>
---
[message trimmed]

[PATCH 1/2] async: [fix] respect max loaded worker count 8 days ago

From Jack Mordaunt to ~gioverse/chat

Signed-off-by: Jack Mordaunt <jackmordaunt.dev@gmail.com>
---
 async/loader.go | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/async/loader.go b/async/loader.go
index e3e72f9..1afe7e0 100644
--- a/async/loader.go
+++ b/async/loader.go
@@ -223,7 +223,7 @@ func (l *Loader) Schedule(tag Tag, load LoadFunc) Resource {
			// 128 is a magic number of maximum workers we will allow.
			// This would translate to "max number of network requests", if all
			// work were to be network-bound.
			l.Scheduler = &FixedWorkerPool{Workers: 128}
[message trimmed]
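
For context on the diff above: a fixed worker pool can be modeled as a
buffered-channel semaphore. This is a hedged sketch under assumed names
(`FixedWorkerPool` appears in the patch, but the internals here are
invented for illustration):

```
package sketch

// FixedWorkerPool caps concurrency at Workers goroutines using a
// buffered channel as a semaphore. Illustrative only; the chat
// package's scheduler internals differ.
type FixedWorkerPool struct {
	Workers int
	slots   chan struct{}
}

// NewFixedWorkerPool allocates the semaphore up front so Schedule
// is safe to call from multiple goroutines.
func NewFixedWorkerPool(workers int) *FixedWorkerPool {
	return &FixedWorkerPool{
		Workers: workers,
		slots:   make(chan struct{}, workers),
	}
}

// Schedule blocks until a slot frees up, then runs fn on its own
// goroutine, releasing the slot when fn returns.
func (p *FixedWorkerPool) Schedule(fn func()) {
	p.slots <- struct{}{} // acquire a slot
	go func() {
		defer func() { <-p.slots }() // release the slot
		fn()
	}()
}
```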

[PATCH 0/2] fix race condition and max worker count 8 days ago

From Jack Mordaunt to ~gioverse/chat

Hi Chris,

This patchset contains two independent commits. The first is the worker pool
limit that I thought I had pushed, but had in fact only pushed to my own remote.

The second fixes a race condition observed during pgc use. Details inside. 

Thanks!

Jack Mordaunt (2):
  async: [fix] respect max loaded worker count
  list: [fix] copy state update slice to avoid data race

 async/loader.go     | 2 +-

[PATCH 5/5] example/page: rename UI.Pages => UI.Router 14 days ago

From Jack Mordaunt to ~whereswaldon/public-inbox

Signed-off-by: Jack Mordaunt <jackmordaunt.dev@gmail.com>
---
 example/page/ui.go | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/example/page/ui.go b/example/page/ui.go
index 164df78..ba14b70 100644
--- a/example/page/ui.go
+++ b/example/page/ui.go
@@ -28,33 +28,33 @@ type UI struct {
	SidePanel widget.SidePanel
	// NavSlugs shows the global slugs for navigation.
	NavSlugs widget.NavSlugs
	// Pages maintains inter-page state, such as history.
[message trimmed]

[PATCH 4/5] router: split out concrete routers 14 days ago

From Jack Mordaunt to ~whereswaldon/public-inbox

Signed-off-by: Jack Mordaunt <jackmordaunt.dev@gmail.com>
---
 router/dynamic.go |  65 +++++++++++++++++++++
 router/router.go  | 144 +---------------------------------------------
 router/static.go  |  80 ++++++++++++++++++++++++++
 3 files changed, 146 insertions(+), 143 deletions(-)
 create mode 100644 router/dynamic.go
 create mode 100644 router/static.go

diff --git a/router/dynamic.go b/router/dynamic.go
new file mode 100644
index 0000000..4851276
--- /dev/null
+++ b/router/dynamic.go
[message trimmed]

[PATCH 3/5] router: document router.NewStatic 14 days ago

From Jack Mordaunt to ~whereswaldon/public-inbox

Signed-off-by: Jack Mordaunt <jackmordaunt.dev@gmail.com>
---
 router/router.go | 1 +
 1 file changed, 1 insertion(+)

diff --git a/router/router.go b/router/router.go
index b533591..c540399 100644
--- a/router/router.go
+++ b/router/router.go
@@ -114,6 +114,7 @@ type Static struct {
	current string
}

// NewStatic allocates an empty Static router.
[message trimmed]

[PATCH 2/5] example/page: integrate new router api 14 days ago

From Jack Mordaunt to ~whereswaldon/public-inbox

Signed-off-by: Jack Mordaunt <jackmordaunt.dev@gmail.com>
---
 example/page/ui.go | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/example/page/ui.go b/example/page/ui.go
index 3b0af2b..164df78 100644
--- a/example/page/ui.go
+++ b/example/page/ui.go
@@ -29,7 +29,7 @@ type UI struct {
	// NavSlugs shows the global slugs for navigation.
	NavSlugs widget.NavSlugs
	// Pages maintains inter-page state, such as history.
	Pages router.Router
[message trimmed]

[PATCH 1/5] router: export History stack for external consumption 14 days ago

From Jack Mordaunt to ~whereswaldon/public-inbox

I don't think we need to guard the history stack from consumers, and the
example relies on it for drawing the nav slugs.
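
Roughly the shape of the change, with invented names (`Historical` is
named in the diff below, but the `History` field and methods here are
illustrative, not the router package's exact API):

```
package sketch

// Historical keeps a navigation history that consumers may read
// directly; exporting the stack lets the example draw nav slugs
// without accessor methods.
type Historical struct {
	History []string
}

// Push records a route on top of the history stack.
func (h *Historical) Push(route string) {
	h.History = append(h.History, route)
}

// Pop removes and returns the most recent route, if any.
func (h *Historical) Pop() (string, bool) {
	if len(h.History) == 0 {
		return "", false
	}
	top := h.History[len(h.History)-1]
	h.History = h.History[:len(h.History)-1]
	return top, true
}
```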

Signed-off-by: Jack Mordaunt <jackmordaunt.dev@gmail.com>
---
 router/router.go | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/router/router.go b/router/router.go
index 294d605..b533591 100644
--- a/router/router.go
+++ b/router/router.go
@@ -189,7 +189,7 @@ func (d *Static) CurrentRoute() []string {
// Historical is a router that preserves history for one or more nested Subrouters.
[message trimmed]

[PATCH 2/2] scheduler: [test] bench batches of messages 18 days ago

From Jack Mordaunt to ~whereswaldon/public-inbox

Introduce a second test that benchmarks how the system handles many
messages, instead of single messages at a time.

We focus on varying the buffer size and send batches of 1 million
messages through the system at once.

100 connections (100 theoretical windows) and 1M messages.

This is not a "realistic use" test; the idea is to see how the system
performs at the extremes.

Results
=======
[message trimmed]

[PATCH 1/2] scheduler: [perf] optimize out the extra channel send 18 days ago

From Jack Mordaunt to ~whereswaldon/public-inbox

This change removes the `go w.run()` goroutine and its associated
channel.

The scheduled closures call `send()` directly, which takes a read lock
and does the fanout.

This reliably reduces allocations, and my benchmarking shows decent
time/op gains for such a simple change.

Before
======

```
sz10-buf0-conn1-tick16ms-rend10ms-12         	   14698	    149130 ns/op	     123 B/op	       3 allocs/op
[message trimmed]
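
The shape of the optimization, with illustrative types (the real
scheduler's `send()` and subscriber bookkeeping differ): fan out
directly from the caller under a read lock instead of forwarding
through a dedicated goroutine and channel.

```
package sketch

import "sync"

// Fanout delivers events to subscribers from the caller's goroutine
// under a read lock, rather than handing them to a long-lived
// `go w.run()` goroutine via an intermediate channel.
type Fanout struct {
	mu   sync.RWMutex
	subs []chan<- int
}

// send fans an event out to every subscriber. Holding only a read
// lock lets concurrent senders proceed while still guarding the
// subscriber list against modification.
func (f *Fanout) send(event int) {
	f.mu.RLock()
	defer f.mu.RUnlock()
	for _, sub := range f.subs {
		sub <- event
	}
}
```

One channel send and one goroutine hop fewer per event, which is where
the time/op gains come from.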