Hacker News | xinbenlv's comments

That's a good idea too, thanks for the suggestion

Also, now that I think about it, you could source the super secret stuff from a second file and keep the .env file publicly readable and available for quick edits while streaming
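A minimal sketch of the two-file idea, assuming hypothetical filenames `.env` (public, streamable) and `.env.secret` (private overlay) and a simple KEY=VALUE format:

```python
def load_env_file(path):
    """Parse simple KEY=VALUE lines from a dotenv-style file into a dict."""
    values = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                # Skip blanks, comments, and malformed lines.
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip()
    except FileNotFoundError:
        pass  # The secret overlay file is optional.
    return values

def load_layered_env(public_path=".env", secret_path=".env.secret"):
    """Load the public file first, then overlay secrets on top of it."""
    env = load_env_file(public_path)
    env.update(load_env_file(secret_path))  # Secrets win on key collisions.
    return env
```

With this layering, the public `.env` can hold placeholders for the keys that `.env.secret` overrides, so it stays safe to show on stream.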

Good points

I am interested. That said, there are many memory and context services. What makes this one different?

Thank you for your interest! That's a great question, and I'd love to give you a thoughtful comparison. Could you share which memory and context services you've been looking at or are most familiar with?

It has happened to me multiple times on Claude Code too; next time I catch it I will try to record the session history and share it here

My question is not about storage being on or off; it's more that when you give an agent access, you are assuming the environment the agent runs in is safe

What if you simply need to give them access? E.g. if you want them to do code review, you have to at least give them read access to the code repo. But you don't know whether the environment where the agent runs will be compromised

If you give read access with their own API key, they will only get read access, i.e. the access that you gave them. I'm not sure what the issue is.

Is the permission model device-and-client based or role based?

Any prompt injection attack could bypass this by simply using Base64 or some other encoding, I guess?

You are absolutely right. Obfuscation like Base64 or ROT13 will always beat a static regex. I was thinking more in terms of a seatbelt against accidental leaks and user error, rather than a defense against adversarial prompt injection. It's about reducing the blast radius of clumsy mistakes, not stopping a determined attacker.
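To illustrate the point: a minimal sketch showing why a static regex filter misses encoded secrets. The `sk-...` key format and the regex are hypothetical stand-ins, not any real scanner:

```python
import base64
import re

# A naive static filter looking for a hypothetical "sk-..." style API key.
SECRET_PATTERN = re.compile(r"sk-[A-Za-z0-9]{16,}")

def leaks_secret(text: str) -> bool:
    """Return True if the static regex spots a plaintext key in the text."""
    return bool(SECRET_PATTERN.search(text))

secret = "sk-abcdef1234567890abcdef"
plain_output = "Here is the key: " + secret
encoded_output = "Here is the key: " + base64.b64encode(secret.encode()).decode()

# The regex catches the plaintext leak...
assert leaks_secret(plain_output)
# ...but a trivial Base64 round-trip slips right past it,
# since standard Base64 output never contains the "-" the pattern needs.
assert not leaks_secret(encoded_output)
```

The same bypass works with ROT13, URL-encoding, or just asking the model to spell the key one character per line, which is why this kind of filter is only a seatbelt.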

Xoogler here too, yes, we used Gmail and Google Workspace at Google.

You can search for an exact string if you use quotes. I think you can also filter out logos


We use Infisical and other mechanisms, but hey, wouldn't it be nice to have one less square inch of attack surface?

Why not have one less square mile of attack surface by not having secrets in a .env file in the first place?

What are people doing that requires something like this?


I think it's common to have dev, not production, secrets there, and I'm reading the blurb about production secrets as meaning non-local secrets. Even dev keys are a pain if they get leaked.

The idea seems nice, with a simple yet effective implementation. While I think I currently have a shell-script syntax-highlighting plugin reading env files, it's definitely overkill. Now if only this could protect against random npm packages reading your env files...


Thanks @pjjpo, exactly. My bad for confusing people: no, we don't put real production credentials in .env. We use mechanisms to ensure separation of secrets. Thank you for saying it's a simple yet effective implementation. If you try it, please let us know your feedback.

This implies there's some kind of shared resource out there on the network that your devs are developing on. Why not make all these resources part of your local dev stack, served on localhost, and use dummy credentials? You can even commit them because they're not sensitive.
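One way to sketch that suggestion, with hypothetical service names and URLs: the dev config points everything at localhost and ships committed dummy credentials, so there is nothing sensitive to leak, while real deployments override via environment variables.

```python
import os

# Committed-to-the-repo dev defaults: every backing service runs on
# localhost, so these "credentials" are dummies and safe to commit.
DEV_DEFAULTS = {
    "DATABASE_URL": "postgres://dev:dev@localhost:5432/app_dev",
    "CACHE_URL": "redis://localhost:6379/0",
    "API_KEY": "dummy-local-key",
}

def get_config(name: str) -> str:
    """Real environments set these as environment variables; local dev
    falls back to the committed localhost defaults."""
    return os.environ.get(name, DEV_DEFAULTS[name])
```

Under this setup there is no `.env` file to protect at all for local development; only deployed environments carry real secrets, injected from outside the repo.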

Ok ok, it is indeed keys to AI APIs. I know it's not kosher to admit that on HN anymore, but it's the reality for me at least. Unfortunately, local models just can't support development of products using them.
