Not sure if this is an appropriate newsgroup for this topic. Correct me
if I'm mistaken and I'll send this elsewhere.
I was researching NNTP and came across this project:
https://github.com/nntpchan/nntpchan/
It uses NNTP as a base protocol for other services. Personally, I think
it's a great idea, and it got me thinking.
Wireless ad-hoc mesh networks are an interest of mine. Normally the
purpose of the network is to route a traditional TCP/IP stack on top of
whatever routing technology is in use (like babel). But since radios
naturally broadcast, a store-and-forward message service like news seems
like a natural fit.
The idea is to use a smart flooding algorithm, like uflood
(https://pdos.csail.mit.edu/~jaya/uflood_thesis.pdf), and skip all the
routing/high-speed packet delivery problems and just flood news
articles over it. I think it would be a good fit.
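To make that concrete, here is the dumbest possible flood-and-store node I
can picture, as a rough Python sketch. The radio and spool objects are
placeholders I invented; each article gets stored and rebroadcast at most
once, keyed on the Message-ID header articles already carry. Something like
uflood would add neighbor-aware scheduling on top of this; this is just the
skeleton.

import re

class FloodNode:
    def __init__(self, radio, spool):
        self.radio = radio    # placeholder: needs .broadcast(bytes)
        self.spool = spool    # placeholder: needs .store(message_id, article_text)
        self.seen = set()     # Message-IDs this node has already flooded

    @staticmethod
    def message_id(article_text):
        m = re.search(r"^Message-ID:\s*(<[^>]+>)", article_text,
                      re.MULTILINE | re.IGNORECASE)
        return m.group(1) if m else None

    def on_receive(self, raw_bytes):
        article = raw_bytes.decode("utf-8", errors="replace")
        mid = self.message_id(article)
        if mid is None or mid in self.seen:
            return                        # malformed or already seen: drop it
        self.seen.add(mid)
        self.spool.store(mid, article)    # keep a local copy (store and forward)
        self.radio.broadcast(raw_bytes)   # rebroadcast exactly once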
Usenet is already decentralized, so decentralizing the infrastructure too
seems like a cool idea. If I were going to do it, I'd add some kind of
proof-of-work scheme to prevent spamming the network. Bandwidth would be
low, since the air-time of a large mesh network saturates quickly, but I
see that as a plus: it prevents abuse (spamming binaries on the net).
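The proof-of-work I'm picturing is hashcash-style: the poster grinds a nonce
until the article hash has some number of leading zero bits, and relays
refuse to flood anything that doesn't check out. A toy sketch (SHA-256; the
X-PoW-Nonce header name is something I just made up):

import hashlib

DIFFICULTY_BITS = 20   # peers would agree on this; higher means more work per post

def pow_ok(article, nonce, bits=DIFFICULTY_BITS):
    digest = hashlib.sha256(f"{article}\nX-PoW-Nonce: {nonce}".encode()).digest()
    value = int.from_bytes(digest, "big")
    return (value >> (256 - bits)) == 0   # top `bits` bits must be zero

def mine(article, bits=DIFFICULTY_BITS):
    nonce = 0
    while not pow_ok(article, nonce, bits):
        nonce += 1
    return nonce   # the poster attaches this in an X-PoW-Nonce header

# A relay would call pow_ok() before flooding an article and drop anything that fails.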
It's half baked, but I wanted to put my thoughts out there and see if
other work has already been done on something like this.
On Thu, 20 Mar 2025 18:41:21 -0400, Toaster <toaster@dne3.net> wrote:
[...]
very nice website . . . https://www.shibaura-it.ac.jp/en/index.html
afaik, the old-fashioned method was public pgp keyrings and
clear-signed plain-text messages for authenticating articles posted
anonymously to unmoderated usenet newsgroups, but there could be more
modern and easier-to-use technologies for confirming "proof-of-work"
without extra effort; otherwise the group would have to be moderated,
as social media always is
also, news:news.software.nntp is probably the most on-topic newsgroup
to ask the experts there (ditto news:news.admin.peering as you
already know)
Introducing Proof-of-Work Defense for Onion Services
by pavel | August 23, 2023
Today, we are officially introducing a proof-of-work (PoW) defense for
onion services designed to prioritize verified network traffic as a
deterrent against denial of service (DoS) attacks with the release of
Tor 0.4.8.
Tor's PoW defense is a dynamic and reactive mechanism, remaining dormant
under normal use conditions to ensure a seamless user experience, but
when an onion service is under stress, the mechanism will prompt incoming
client connections to perform a number of successively more complex
operations. The onion service will then prioritize these connections
based on the effort level demonstrated by the client. We believe that
the introduction of a proof-of-work mechanism will disincentivize
attackers by making large-scale attacks costly and impractical while
giving priority to legitimate traffic. Onion Services are encouraged to
update to version 0.4.8.
Why the need?
The inherent design of onion services, which prioritizes user privacy by
obfuscating IP addresses, has made it vulnerable to DoS attacks, and
traditional IP-based rate limits have been imperfect protections in these
scenarios. In need of alternative solutions, we devised a proof-of-work
mechanism involving a client puzzle to thwart DoS attacks without
compromising user privacy.
How does it work?
Proof of work acts as a ticket system that is turned off by default, but
adapts to network stress by creating a priority queue. Before accessing
an onion service, a small puzzle must be solved, proving that some "work"
has been done by the client. The harder the puzzle, the more work is
being performed, proving a user is genuine and not a bot trying to flood
the service. Ultimately the proof-of-work mechanism blocks attackers
while giving real users a chance to reach their destination.
What does this mean for attackers and users?
If attackers attempt to flood an onion service with requests, the PoW
defense will kick into action and increase the computational effort
required to access a .onion site. This ticketing system aims to
disadvantage attackers who make a huge number of connection attempts to
an onion service. Sustaining these kinds of attacks will require a lot
of computational effort on their part with diminishing returns, as the
effort increases.
For everyday users, however, who tend to submit only a few requests at
a time, the added computational effort of solving the puzzle is
manageable for most devices, with initial times per solve ranging from
5 milliseconds for faster computers and up to 30 milliseconds for slower
hardware. If the attack traffic increases, the effort of the work will
increase, up to roughly 1 minute of work. While this process is
invisible to the users and makes waiting on a proof-of-work solution
comparable to waiting on a slow network connection, it has the distinct
advantage of providing them with a chance to access the Tor network even
when it is under stress by proving their humanity.
Where do we go from here?
Over the past year, we have put a lot of work into mitigating attacks on
our network and enhancing our defense for onion services. The
introduction of Tor's PoW defense not only positions onion services
among the few communication protocols with built-in DoS protections but
also, when adopted by major sites, promises to reduce the negative
impact of targeted attacks on network speeds. The dynamic nature of this
system helps balance the load during sudden surges in traffic, ensuring
more consistent and reliable access to onion services.
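The pattern described above boils down to a client puzzle plus an
effort-ordered queue, which could apply per-relay on a mesh too. The Python
below is only a toy illustration of that general pattern, not Tor's
implementation; the real defense is built on the Equi-X puzzle and is
considerably more involved.

import hashlib
import heapq
import itertools

def effort(client_id, nonce):
    # effort = leading zero bits of the puzzle hash (more zeros = more work done)
    digest = hashlib.sha256(f"{client_id}:{nonce}".encode()).digest()
    return 256 - int.from_bytes(digest, "big").bit_length()

class EffortQueue:
    # serve pending requests highest-effort-first when under load
    def __init__(self):
        self._heap = []
        self._tiebreak = itertools.count()

    def submit(self, client_id, nonce, request):
        # negate the effort because heapq is a min-heap
        heapq.heappush(self._heap, (-effort(client_id, nonce),
                                    next(self._tiebreak), request))

    def next_request(self):
        return heapq.heappop(self._heap)[2] if self._heap else None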
On Thu, 20 Mar 2025 23:56:47 +0000
D <noreply@mixmin.net> wrote:
[...]
Thank you for the advice, very much appreciated. It's my belief that
moderation should be at the community/group level and not centrally
done. I think usenet's model is pretty good if used properly.
The implementation I have in my head is a series of small Raspberry
Pi-type SoCs, each with a radio and a battery, using a low-frequency band
like 27 MHz. Have a bunch of people in the neighborhood install one in
their house; they all peer via flooding, no setup needed. With a small
battery, it'd run when nothing else will, and each peer can run some sort
of verification to prevent spam.
The nice thing about text is that it's small and doesn't have to be
real-time. Compress it for more throughput.
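For the compression part, stock zlib already shrinks plain-text articles a
lot. A quick sketch of what a node might do before keying the radio (the
2-byte length-prefixed frame format is just something I made up for
illustration):

import zlib

def to_frame(article_text):
    payload = zlib.compress(article_text.encode("utf-8"), 9)
    return len(payload).to_bytes(2, "big") + payload     # 2-byte length prefix

def from_frame(frame):
    length = int.from_bytes(frame[:2], "big")
    return zlib.decompress(frame[2:2 + length]).decode("utf-8")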