
it costs money to send a lot of emails that aren't immediately blocked or sent to the junk folder


yep! I decided to repurpose the pikaday.com domain because it was still seeing a lot of traffic despite the project being unmaintained for years.


those dithering perverts! how dare they try to educate us with their fancy graphics!


A big misconception I've seen is the assumption that Nostr relays are federated and share messages between one another. This is not how it works. So if you're building a "Twitter clone" the client app must search multiple relays and post to multiple relays. If clients are not using a relay in common they cannot see one another.

The end result is a bad experience for both user and developer. Using a single relay is centralised and defeats the point. Using multiple relays is slow and cumbersome and requires the user to know/care which relays they are connecting to.
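
To make the fan-out concrete, here's a rough sketch of what a client has to do just to publish one note, using the standard NIP-01 wire format (the relay URLs are made up, and signedEvent is assumed to be a signed Nostr event created elsewhere):

    // Open a socket per relay and send the same signed event to each.
    const relays = ["wss://relay-a.example", "wss://relay-b.example"];

    function publish(signedEvent: { id: string }) {
      for (const url of relays) {
        const ws = new WebSocket(url);
        ws.onopen = () => ws.send(JSON.stringify(["EVENT", signedEvent]));
        ws.onmessage = (msg) => {
          // relays reply ["OK", <event id>, <accepted>, <message>]
          const [type, _id, accepted] = JSON.parse(msg.data);
          if (type === "OK") console.log(`${url} accepted: ${accepted}`);
        };
      }
    }

Reading is the same problem in reverse: a ["REQ", ...] subscription per relay, then de-duplicating events by id on the client.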

When I played with Nostr a couple of years ago the "NIPs" were already a complete mess. Later NIPs supersede earlier NIPs, changing how clients are supposed to interpret messages. At least some are flagged as "unrecommended: deprecated" now.


Relays can federate. The point is that Nostr as a protocol is saying nothing about this and does not care either.

I'm running an indexer (a relay) which federates with other relay indexers, similar to how ActivityPub relays work. Any client can connect to the indexer to help with bootstrapping and to find metadata around events. There are many ways for clients to discover stuff even without being connected to the same relay.


This is a valid observation and a hurdle of sorts, and to me a fascinating problem to work on. There are a few approaches to solving it. For instance NIP-65, where one defines in their profile metadata which relays they read/write to, giving clients the ability to discover all the right content. That's just one approach, and some are exploring other ideas. It seems like a very solvable problem anyway.
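
For reference, that relay list is just a replaceable event of kind 10002 whose "r" tags advertise where the author reads and writes. A minimal sketch of its shape (the URLs are placeholders, and the id/pubkey/sig fields are omitted):

    // Shape of a NIP-65 relay list event (signing fields omitted).
    const relayList = {
      kind: 10002,
      content: "",
      tags: [
        ["r", "wss://writes.example", "write"], // where I publish
        ["r", "wss://reads.example", "read"],   // where I want to be reached
        ["r", "wss://both.example"],            // no marker = read and write
      ],
    };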


That's a misconception: you don't "use" relays (in the sense that you don't have to have a static list of relays you always use), you write to relays. When reading, you connect to the relays of whoever you want to read from.

Some apps do indeed use this method of selecting a static set of relays, and if that was the protocol you would be correct about centralization or bloat, but this is legacy from a naïve, unfinished early implementation; most apps do the correct thing now and the rest are transitioning.


Most clients now support outbox, so you don't need a common relay. Users have inbox and outbox relays, and clients use these to retrieve and send notes.
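
Roughly, the read side of outbox works like this: fetch the author's kind 10002 relay list from some bootstrap relay, then subscribe to their notes on their write relays. A hedged sketch (the indexer URL is hypothetical):

    // Outbox read path: relay list first, then the author's write relays.
    function readAuthor(pubkey: string) {
      const bootstrap = new WebSocket("wss://indexer.example"); // hypothetical
      bootstrap.onopen = () =>
        bootstrap.send(JSON.stringify(["REQ", "rl", { kinds: [10002], authors: [pubkey] }]));
      bootstrap.onmessage = (msg) => {
        const [type, _sub, event] = JSON.parse(msg.data);
        if (type !== "EVENT") return;
        // "r" tags without a "read" marker are relays the author writes to
        const writeRelays = event.tags
          .filter((t: string[]) => t[0] === "r" && t[2] !== "read")
          .map((t: string[]) => t[1]);
        for (const url of writeRelays) {
          const ws = new WebSocket(url);
          ws.onopen = () =>
            ws.send(JSON.stringify(["REQ", "notes", { kinds: [1], authors: [pubkey] }]));
          // incoming ["EVENT", "notes", note] frames arrive on ws.onmessage
        }
      };
    }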


Since relays don't own user-generated content, there is no need to "federate"; clients generally rely on user-selected relay sets. The user chooses where they want to read/write events to/from.

That said, many of the "larger" relays do store events from other relays (federation, if you prefer). Primal does, TheForrest does, nostr.land, and so on. Nostr.land specifically exists to aggregate notes from many other public relays, with spam filtering. It's a paid relay built for that purpose. If you don't want that, use someone else.

Most users probably get to see 99% of notes through the current relay federation now, though it's also impossible to verify those metrics.

Certain clients and signers store notes privately, so if a relay ever decides to censor your notes you can just publish them to a different relay, if it doesn't have them already.

Chances are, if you use ANY of the popular paid relay providers, you're going to get warnings on 3/4 of write events that the other relays _already_ have the note published to the first. It's usually that quick...

Relays also "federate" by acting as clients themselves. Most relay software available already offers this as an option; users may run one as a local cache for when they're on mobile and the network/wifi is slow. Their local relay slowly pulls notes from other relays (or via outbox) and caches them for when they load their client up. It's a cache, and the client dev didn't even have to write that functionality; it's transparent.
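
A minimal sketch of that relay-as-client idea, assuming a hypothetical upstream relay and ignoring persistence entirely:

    // Local cache relay: subscribe upstream, keep whatever arrives.
    const upstream = new WebSocket("wss://big-relay.example"); // hypothetical
    const cache = new Map<string, unknown>(); // event id -> event

    upstream.onopen = () => {
      // mirror recent text notes (kind 1) locally
      upstream.send(JSON.stringify(["REQ", "mirror", { kinds: [1], limit: 500 }]));
    };

    upstream.onmessage = (msg) => {
      const [type, _sub, event] = JSON.parse(msg.data);
      if (type === "EVENT") cache.set(event.id, event); // served to local clients later
    };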

Finally, others mentioned outbox, which has its own set of issues as well, but it doesn't matter because a client developer can choose to give the user the option if they want: going from federated to peer-to-peer, which was the intention.


There are some messed-up things in a few NIPs because the technology evolved fast.

Most NIPs are fine and continuously improved.

This is trivial to solve with a periodic release of the NIPs, as done in other specs. So far there hasn't been much need for that formality; most developers quickly understand how to create tools on top of it.


Yep. There is no common model for message propagation, so there is no “net force” or clear direction.


It is somewhat misleading to feature a Twitter clone on the front page when Mastodon is a better way to achieve that.

The protocol's real value lies in other use cases.


Mastodon only merged their server-side recursive fetching of remote replies in the summer of this year, so unless instance admins used 3rd-party scripts to achieve that, you couldn't rely on your reply actually being shown to the recipient. ActivityPub is complicated like that.


Nostr has a better UX than ActivityPub IMO, for the basic reason that you do not need to learn how to self-host a server or depend on an unaccountable admin that might shut down the server at any time. In Nostr, you just create an account and go!

The big issue is the lack of content, but that's a social problem, not a UX problem.


Nostr's UX on Primal is 10x better than Mastodon's, IMO. I haven't looked into how Mastodon works, but every time I try an application built on it, it's been an unpleasant experience.


> Delete old emails and pictures as data centres require vast amounts of water to cool their systems.

Also the UK government:

> Taken together, the 50 measures will make the UK irresistible to AI firms looking to start, scale, or grow their business. It builds on recent progress in AI that saw £25 billion of new investment in data centres announced since the government took office last July.

https://www.gov.uk/government/news/prime-minister-sets-out-b...


I'm surprised anyone wants to put a data center in the UK given our top-of-the-range electricity costs.


When George Orwell wrote 1984 it wasn’t just the Soviet Union he was criticizing.

He was also satirizing some of the most absurd parts of British political culture. Doublethink wasn't just part of Soviet political culture; it was part of British political culture, and clearly still is, based on what you just posted.


There's a reason 1984, V for Vendetta and all such stories come from the UK.


Soviet doublethink?

Is this holding two ideas in your head at the same time, or holding contradictory ideas in your head at the same time?


I heard Russell Brand once say something like: the mark of an intelligent man is the ability to hold two conflicting ideas. And then I remembered his first TV job as Big Brother's Big Mouth!


That’s a quote from F. Scott Fitzgerald.

https://quoteinvestigator.com/2020/01/05/intelligence/


However, holding two conflicting ideas and not even realising the contradictions is a mark of a brainwashed individual.


I imagine he was probably still pretty salty about the whole Gollancz/Homage to Catalonia thing.


Never mind water: there were (still are?) restrictions on new builds in the West London area because the grid could not cope with the ever-increasing number of data centres popping up in old trading estates.


The web app makes 176 requests and downloads 130 megabytes.

It's also <div> soup and largely inaccessible.

These and other issues could be fixed fairly quickly with a little care (if anyone cares).


!!!

Yeah, it turns out those speaker avatar images are 1MB+ PNGs! And there are 170 of them.

What a fantastic cautionary tale about vibe-coding on a mobile phone (where performance analysis tools aren't easily available).

I just fixed that with Codex - thanks for the tip: https://chatgpt.com/s/cd_6879631d99c48191b1ab7f84dfab8dea

As far as accessibility goes... yeah, the lack of semantic markup is pretty shocking! I'll remember to prompt for that next time I try anything like this.

I just tried it in VoiceOver on iOS and the page was at least navigable - the buttons for the days work - but yeah, I'd be very ashamed to ship something like this if it wasn't a 20 minute demo (and I'm a bit ashamed even given that).

I'm running "Make open-sauce-2025.html accessible to screenreaders" in Codex now to see what happens.


OK, Codex seemed to do a decent job of fixing up the accessibility, I just shipped its changes: https://github.com/simonw/tools/issues/36


it's concerning these vibe coding tools must be coerced into semantic markup

could that be solved by prefixing every prompt with a reminder?


> it's concerning these vibe coding tools must be coerced into semantic markup

These things are becoming more and more humanlike every day


Definitely. If I had a Claude.md or agents.md or whatever in that repo saying "make mobile friendly sites that use semantic HTML and are accessible for screenreaders" I bet I'd get much better results from them.


would have been fun to see how conference wifi handled that :)


Do developers not see the irony of publishing AI generated code with a LICENSE?

at least this one does:

> Postscriptum: Yes, I did slap an Apache 2 license on it. Is that even valid when there's barely a human in the loop? A fascinating question but not one I'm not eager to figure out myself. It is however certainly something we'll all have to confront sooner or later.


The irony is even deeper than it appears. According to current US copyright doctrine, if Claude genuinely did all the work with minimal human creative input, the Salt Bae dash of ASL2.0 is essentially decorative - you can't license rights that don't exist.

The research shows the US Copyright Office hasn't caught up with `claude` code: they claim that prompting alone doesn't create authorship, regardless of complexity. Without "substantial" human modification and creative control over the final expression, the code lands in public domain by default. Not that it matters here, but anyone could theoretically use Ronacher's library while ignoring the Apache 2 terms entirely.

What makes this fascinating is that Ronacher knows this ("Is that even valid when there's barely a human in the loop?") but published anyway. It's a perfect microcosm of our current predicament - we're all slapping licenses on potentially unenforceable code because the alternative is... what exactly?


> What makes this fascinating is that Ronacher knows this ("Is that even valid when there's barely a human in the loop?") but published anyway.

That has very pragmatic reasons. People should be able to use this library, and in my jurisdiction I cannot place things in the public domain. So there are two outcomes: the Apache 2 license is valid and you can use it, or it was impossible to copyright it in the first place and it's in the public domain. Either way you are free to use it.

I'm not sure what else I can really do here.


It makes it easier for users to enable permissions, accidentally too, and thus lowers security and privacy. Google products are designed to exploit that. Google probably has data showing a large number of users have disabled such permissions globally, with no easy path to trick them into opting back in. That would be the cynical view!

edit: also one can never be too paranoid around Google.


> It makes it easier for users to enable permissions, accidentally too, and thus lower security and privacy. Google products are designed to exploit that.

I learned a while back that Google Maps was moved from maps.google.com to google.com/maps so that when people gave location permission to Maps, Google Search could also use that permission.


> I learned a while back that Google Maps was moved from maps.google.com to google.com/maps so that when people gave location permission to Maps, Google Search could also use that permission.

This does not appear to be the case, at least on iOS Safari. I went to Google Maps, gave it permission, then went to Google Search and searched for “delivery near me.” It again asked me for permission.
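
Permission grants attach to an origin (scheme + host + port), so whether two Google surfaces share a grant depends on whether they're actually served from the same origin. Where the Permissions API is supported, you can check the current state from the console on each origin, something like:

    // Geolocation grants are per origin; the same query can return
    // different states on maps.google.com vs google.com.
    const status = await navigator.permissions.query({ name: "geolocation" });
    console.log(status.state); // "granted", "prompt", or "denied" for this origin only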


I imagine browsers have special logic for misbehaving major websites baked in all over the place.


Yes, your example is exactly the same, and yet I fear you're missing the point entirely.


It'll run via an ESM import of an NPM dependency; Deno just doesn't allow CommonJS at the top level.


Reading the documentation, it seems that `require` at the top-level should be possible under one of three scenarios:

(1) the file extension is "cjs",

(2) there is a package.json file with "type" set to "commonjs", or

(3) a require function is created with createRequire

https://docs.deno.com/runtime/fundamentals/node/#commonjs-su...

Seems like the code from the article will run as-is with either of the first two options. In any case, this is what I meant by Deno not being able to run the code "unless one explicitly configures" it.
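
For what it's worth, option (3) looks something like this (the package name is just a placeholder):

    // main.ts: build a require() for this module via node:module
    import { createRequire } from "node:module";

    const require = createRequire(import.meta.url);
    const pkg = require("some-package"); // placeholder npm package name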


The point still stands

