A list of services that provide access to the Tor network, and in particular to Tor hidden services, through a regular web interface (tor2web gateways). We also keep track of potential injection or abuse by such services (the Scam column).
URL | Status | Domain | Log | Technology | Scam |
---|---|---|---|---|---|
https://onion.re/ | DOWN | onion.re | full | custom | no |
https://onion.foundation/ | DOWN | onion.foundation | full | custom | no |
https://onion.gg | DOWN | onion.gg | full | custom | no |
https://onion.nz/ | DOWN | onion.nz | |||
https://onion.to/ | DOWN | onion.to | |||
https://onion.city/ | DOWN | onion.city | |||
https://onion.cab/ | DOWN | onion.cab | |||
https://onion.direct/ | DOWN | onion.direct | |||
https://tor2web.io/ | DOWN | tor2web.io | |||
https://onion.pet | UP | onion.pet | |||
https://onion.link/ | DOWN | onion.link | |||
https://onion.ly/ | UP | onion.ly | |||
https://cyber-hub.pw/tor2web.php | DOWN | cyber-hub.pw | | | yes |
https://onion.nu | DOWN | onion.nu | |||
https://onion.sh | DOWN | onion.sh | |||
https://onion.moe | DOWN | onion.moe | |||
https://onion.ws | DOWN | onion.ws | | | yes |
https://onion.xyz | DOWN | onion.xyz | | | yes |
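
Most of these gateways are reached by swapping the `.onion` suffix of a hidden-service address for the gateway's own domain (for example, `<address>.onion` becomes `<address>.onion.pet`). The sketch below assumes that convention holds for the services listed above; the onion address is a placeholder, and the naive HEAD probe is just one way the Status column could be checked.

```python
# Minimal sketch (not part of this repo): rewrite an .onion URL through a
# tor2web-style gateway and probe whether the gateway answers at all.
import urllib.request

GATEWAYS = ["onion.pet", "onion.ly"]  # the entries currently marked UP above

def gateway_url(onion_url: str, gateway: str) -> str:
    """Rewrite an .onion URL so it is served through a tor2web-style gateway."""
    host_and_path = onion_url.split("://", 1)[-1]
    host, _, path = host_and_path.partition("/")
    # e.g. "abc123.onion" + "onion.pet" -> "abc123.onion.pet"
    label = host[: -len(".onion")] if host.endswith(".onion") else host
    return f"https://{label}.{gateway}/{path}"

def is_reachable(url: str, timeout: float = 10.0) -> bool:
    """Send a HEAD request; treat any HTTP response as 'up', any error as 'down'."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout):
            return True
    except Exception:
        return False

if __name__ == "__main__":
    # Placeholder address, not a real hidden service.
    onion = "https://exampleonionaddressexampleonionaddressexampleonio.onion/"
    for gw in GATEWAYS:
        url = gateway_url(onion, gw)
        print(url, "reachable" if is_reachable(url) else "unreachable")
```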
@denzuko Yeah, there's a reason for that, which @reisenbauer mentions further up in this repo.
It's a bad idea to run a public-facing proxy for hidden services, because so many of them are illegal, and there's little point in running a proxy restricted to a whitelist of known URLs either:
most of the allowed URLs would then be services that are already available on the clear web.
Another option, if someone wanted to dedicate their life to it, would be to let people submit and confirm individual URLs through a Telegram bot.
You could then ask the user to provide a reason why they want the site available on the clear web.
If someone wanted to re-share a personal blog that way, I'd think that should be perfectly fine.
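
A rough sketch of what such a bot could look like, using the plain Telegram Bot HTTP API. The `/submit` command, the `submissions.txt` review queue, and the `BOT_TOKEN` placeholder are assumptions for illustration; only the `getUpdates` and `sendMessage` endpoints are part of the official Bot API.

```python
# Long-polling bot that accepts "/submit <url> <reason>" and queues it for review.
import json
import urllib.parse
import urllib.request

BOT_TOKEN = "123456:replace-me"  # hypothetical token
API = f"https://api.telegram.org/bot{BOT_TOKEN}"

def call(method: str, **params):
    """Call a Telegram Bot API method and return the decoded JSON response."""
    data = urllib.parse.urlencode(params).encode()
    with urllib.request.urlopen(f"{API}/{method}", data=data, timeout=35) as resp:
        return json.load(resp)

def main():
    offset = 0
    while True:
        updates = call("getUpdates", offset=offset, timeout=30)
        for update in updates.get("result", []):
            offset = update["update_id"] + 1
            message = update.get("message") or {}
            text = message.get("text", "")
            chat_id = message.get("chat", {}).get("id")
            if not text.startswith("/submit") or chat_id is None:
                continue
            parts = text.split(maxsplit=2)
            if len(parts) < 3:
                call("sendMessage", chat_id=chat_id,
                     text="Usage: /submit <onion-url> <reason>")
                continue
            _, url, reason = parts
            with open("submissions.txt", "a") as fh:  # queue for manual review
                fh.write(json.dumps({"url": url, "reason": reason}) + "\n")
            call("sendMessage", chat_id=chat_id,
                 text="Thanks, your URL has been queued for review.")

if __name__ == "__main__":
    main()
```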
Though technically it's still a copyright issue on top of that,
unless the content is explicitly licensed under something like Creative Commons.
Basically, to be safe, the author would have to include a specific header in every page, e.g.
`<meta name="allow-mirroring" content="true">`
or
`<meta name="content-license" content="CC0">`
... something like that.
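
A sketch of how a mirror could check for such an opt-in before copying a page. The tag names `allow-mirroring` and `content-license` are not a standard; they are just the made-up convention from the comment above.

```python
# Check a page for the hypothetical opt-in meta tags before mirroring it.
from html.parser import HTMLParser
import urllib.request

class MetaCollector(HTMLParser):
    """Collect <meta name=... content=...> pairs from a page."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs:
                self.meta[attrs["name"].lower()] = attrs.get("content", "")

def mirroring_allowed(url: str) -> bool:
    with urllib.request.urlopen(url, timeout=15) as resp:
        charset = resp.headers.get_content_charset() or "utf-8"
        html = resp.read().decode(charset, errors="replace")
    collector = MetaCollector()
    collector.feed(html)
    if collector.meta.get("allow-mirroring", "").lower() == "true":
        return True
    # Treat any Creative Commons license label as an opt-in, per the idea above.
    return collector.meta.get("content-license", "").upper().startswith("CC")
```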
The Internet Archive, for example, has been operating freely under the assumption that fair use grants it the right to copy "limited works" for education (historic preservation could be a valid claim).
There are a few simple reasons why this approach has worked so far,
mainly the ability for site owners to exclude their pages from crawling by targeting a specific user agent in their robots.txt file.
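
For illustration, a crawler's permission for a given page can be checked against robots.txt with the standard library. `ia_archiver` is the user-agent token historically associated with the Internet Archive's crawling; whether an archive still honors robots.txt is up to that archive, not this code.

```python
# Check whether a crawler user agent may fetch a URL per the site's robots.txt.
import urllib.parse
from urllib.robotparser import RobotFileParser

def may_archive(page_url: str, user_agent: str = "ia_archiver") -> bool:
    parsed = urllib.parse.urlsplit(page_url)
    robots = RobotFileParser()
    robots.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    robots.read()
    return robots.can_fetch(user_agent, page_url)

if __name__ == "__main__":
    print(may_archive("https://example.com/some/page"))
```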
Personally, I don't think web.archive.org (IA) has good legal standing there, since fair use is defined very loosely.
But believe me when I say I don't think any sane "regular internet user" wants copyright law updated.
It likely wouldn't work in favor of consumers but mainly in favor of large corporations,
allowing them to take down content in a split second without proper review.
(This is a nightmarish scenario and there is currently no evidence to support that the U.S. could enforce such a system worldwide.)
I bet the internet will do everything in its power to help the Archive withstand any legal issues it might face in the future.
IF AND WHENEVER YOU CAN - DONATE! https://archive.org/donate?origin=github-spookyhell
(I'm not an affiliate nor do I know if they'll even track it with this origin)