Same here
Jackett Version 0.8.886.0
Windows Education 10.0.16299
Same.
Same here. Tried to delete the tracker and add it again. Even that failed.
It seems like they are blocking scraping tools like Jackett again:
Message when accessing it (via jackett):
You are either using a scraping tool or JavaScript is disabled. If it is the first case, please stop here and do not proceed further; instead use RSS feeds for scraping data. In the second case, please enable JavaScript and refresh this page twice (you are free to disable it again after that).
Please start a discussion in their forums regarding this.
This is the response I got from the support staff:
There's not much we can or will do if Jackett broke, and not much productive would come of discussing it on site. They need to update their code to support AB.
Is it the search api that's failing or the login api?
Looks like another cookie has been added, without which it returns a 403 Forbidden error.
Looks like the staff member misinterpreted your question. Yes, we are blocking any scraping tools from performing queries against the search page by means of a dynamically created cookie in JavaScript. While you can obviously work around this, it will achieve nothing except a cat-and-mouse game, and it will show that you are willing to ignore the wishes of staff, as the message is quite clear on the fact that we do not want Jackett or other scraping tools.
In the future we may consider opening a search API, accessible similarly to RSS feeds via passkey authentication, in which case Jackett would be free to use it. For now, however, we would like to request the removal of AnimeBytes support from Jackett.
Is it the search api that's failing or the login api?
What kind of API? Jackett requests HTML pages and operates on DOM elements to create requests, similarly to if you had clicked in a browser - there is no API there.
Sorry, the wording came across wrong. I meant to ask whether the login was failing or the search. Turns out it's the search.
So, basically, since the RSS feed doesn't have more than 50 items at the moment, we can only perform manual searches on AB? This is a huge inconvenience but that's fine for now. Hoping for the search API to come soon.
Also, is there any reason only the search page is blocked unless the dynamic cookie is present and the login and main page are allowed?
Also, is there any reason only the search page is blocked unless the dynamic cookie is present and the login and main page are allowed?
We still want to allow users to use the site without JavaScript - this check is performed only on the search page as a way of weighing the ability to use the site without JavaScript against blocking any scraper tools.
So, basically, since the RSS feed doesn't have more than 50 items at the moment, we can only perform manual searches on AB? This is a huge inconvenience but that's fine for now. Hoping for the search API to come soon.
By all means, you should use custom RSS feeds (Power User+) if you want to fetch new releases. An alternative is to fetch the global RSS feed and perform the check in the client, or to use something like autodl-irssi. We see no reason to implement a special feature that would allow searching the site externally; despite this, we obviously see this form of browsing gaining some traction (which I personally don't understand, but let's not discuss why Sonarr/Radarr even exist in the first place here), and an external search API (a Torznab API in the future, when we get to reworking the search engine) is on the TODO list.
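The "fetch the global feed and perform the check in the client" option could be sketched roughly like this (an illustrative Python sketch; the actual AnimeBytes feed URL and item layout are not shown in this thread, so standard RSS 2.0 field names are assumed):

```python
import xml.etree.ElementTree as ET

def filter_feed(feed_xml: str, keyword: str) -> list:
    """Parse an RSS feed and keep only the item titles containing keyword.

    Assumes standard RSS 2.0 structure (<rss><channel><item><title>);
    the real feed layout may differ.
    """
    root = ET.fromstring(feed_xml)
    return [
        item.findtext("title")
        for item in root.iter("item")
        if keyword.lower() in (item.findtext("title") or "").lower()
    ]
```

The client would poll the feed on an interval and hand matching items to the download client, which is essentially what autodl-irssi automates for IRC announce channels.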
There are many reasons why we don't want scraping tools like Jackett, but the most important ones are these:
It can't handle two-factor authentication
It forces the user to provide and store an unprotected password in a configuration file, often on poorly configured and secured seedboxes (yes, it's possible to gather passkeys from Sonarr/Radarr under certain circumstances on shared seedboxes).
Sharing improper security practices - almost every time a GitHub issue was opened regarding a problem with AnimeBytes, you asked the user to change their password to a less complicated one that did not contain special characters - this is absolutely unacceptable.
We do not allow log-ins from VPNs and shared seedboxes; however, many people do not consider Jackett a browser, despite what it actually does internally to function. Because of this we have had to explain it multiple times, and we're getting tired of it. This also includes cases where the user is not aware of why their shared seedbox setup doesn't work (yes, despite the information you provide - users apparently have a tendency not to read).
Thanks for the reasoning behind the decision, @proton-ab. Hopefully, after seeing this people won't raise any more related issues, but I wouldn't count on it.
I believe this should also be posted on the forums at AB.
@proton-ab
Thank you for sharing your reasons. While I'm personally not a big anime fan and don't use AnimeBytes at all, my understanding is that AnimeBytes is considered the best anime torrent tracker available. Because of that, I would prefer it if Jackett users could continue to use it or had an equivalent replacement (an RSS/Torznab feed with a search feature). Here are some comments and suggestions from my side:
Regarding 2FA: we have, e.g., this enhancement issue open: #1872. Jackett is open source, but so far no one has bothered implementing it. But if that's really a showstopper for you, I'll invest the time and implement it.
Passwords are encrypted using the Microsoft Data Protection API provided by the .NET environment. Of course, this way of protecting sensitive data isn't perfect, but I consider it one of the better ways of doing it. After all, there's no perfect way of protecting a shared secret. The best way of protecting data is not having to store it. Unfortunately, very few trackers support the concept of API keys, which would allow this.
Your recommended solution of using the RSS feed/autodl-irssi is actually worse. I'm not aware of any torrent client encrypting RSS URLs. autodl-irssi definitely stores all passkeys, etc. unencrypted, and by default the autodl config directory/file is readable by anyone. I believe Jackett does a much better job here (still not perfect) of protecting passwords than other tools (e.g. it won't start/work if the encryption keys are world-readable). If you have any specific improvement suggestions, please let us know.
Sharing improper security practices: I did a quick search for animebytes+password and found exactly one issue (#253) from more than two years ago where a password change was suggested, by a developer who is no longer active. I fully agree with you that it's bad practice, but I don't see any trend of us recommending weak passwords. If I ask about special characters in passwords, as I did in #1535, it is only with the intention of identifying bugs and fixing them. It happens very rarely, when I'm running out of ideas about other causes.
Regarding users not being able to read: we have this problem too. Back in Oct 2017 we improved the error handling via https://github.com/Jackett/Jackett/commit/07744ab88f0eaf4ea220984925bc7a06ef8a962e. That might have increased the number of support tickets from these users. We can add a note to the configuration dialog to remind them about the shared-IP policy and change the error message, which hopefully would help reduce the necessary support requests.
It seems like you removed/disabled the ajax.php API which is usually provided by many Gazelle-based trackers. May I ask why? If you could enable it again, we would change the indexer to use it (as we already do for many other trackers). That would eliminate any torrents.php/torrents2.php scraping.
Alternatively, I would suggest a temporary hybrid solution: we query the RSS feed if there are no search keywords and fall back to scraping if there are. While we would lose some information that isn't available via RSS in this case, it should be better than nothing. Once there's an API providing search functionality, we would migrate to it completely.
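The proposed hybrid boils down to a simple dispatch on whether the request carries keywords (an illustrative Python sketch, not Jackett's actual C# implementation):

```python
def pick_source(search_keywords: str) -> str:
    """Temporary hybrid routing: poll the RSS feed for 'latest releases'
    requests (no keywords) and fall back to HTML scraping only when the
    user actually searches for something."""
    if search_keywords and search_keywords.strip():
        return "scrape"
    return "rss"
```

Since Sonarr/Radarr issue keyword-less "recent items" queries far more often than keyword searches, this routing would push the bulk of the traffic onto the feed.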
I believe it's better to work together; I definitely won't start a cat-and-mouse game with you. But I believe the time invested in implementing the JavaScript cookie mechanism could have been better invested in something more useful.
If you need any help with implementing a better (Torznab compatible?) API, I'm happy to help.
First of all, thank you for continuing this thread, we would definitely like to find a common solution to this issue.
Regarding 2FA: we have, e.g., this enhancement issue open: #1872. Jackett is open source, but so far no one has bothered implementing it. But if that's really a showstopper for you, I'll invest the time and implement it.
That would definitely be a huge step forward, especially considering the number of 'hacks' using leaked passwords that trackers are seeing. However, it won't change anything for AnimeBytes specifically - torrents.php won't be available to Jackett anymore, but alternatives are coming (see below for details).
Regarding the encryption of secrets, I don't have any specific suggestions, and I didn't really look at the code to see how it's implemented. Assuming you are checking permissions, that should be enough to satisfy me for now.
If I ask about special characters in passwords, as I did in #1535, it is only with the intention of identifying bugs and fixing them. It happens very rarely, when I'm running out of ideas about other causes.
Regardless of the reason it's asked for, it's plain bad, as people will likely not update the password afterwards, and others will start using it as a way of avoiding possible bugs after seeing such suggestions.
It seems like you removed/disabled the ajax.php API which is usually provided by many Gazelle-based trackers. May I ask why? If you could enable it again, we would change the indexer to use it (as we already do for many other trackers). That would eliminate any torrents.php/torrents2.php scraping.
Sadly, we're using a non-standard Gazelle search engine, and the ajax API you mention returned only group IDs. We plan to open scrape.php soon; it will return JSON objects of groups and the torrents inside them, accessible entirely via passkey, which will eliminate the need for storing passwords, 2FA, cookies, or any other issues that come with them. I cannot give you any estimate on it; sadly, this implementation requires working with legacy code, and our priorities are set on reworking legacy Gazelle code into the Tentacles framework. But I can assure you that it will come, and we will announce it on the Dev Blog and here.
But I believe the time invested in implementing the JavaScript cookie mechanism could have been better invested in something more useful.
Actually, the JavaScript cookie mechanism is so simple that it takes a single line of code in JavaScript and two lines of code on the PHP side (including a newline).
If you need any help with implementing a better (Torznab compatible?) API, I'm happy to help.
Sadly, this is scheduled to be done whenever we get to reworking the search engine, which implies migrating from Sphinx to something better and rewriting a LOT of legacy code. It can't be done until smaller parts of the rewrite are finished; otherwise we would be stuck rewriting half of the site at once, which would quickly turn into a never-finished mess.
@proton-ab thank you for explaining it. It seems like you have to deal with a lot of legacy stuff. What about my suggestion of allowing limited scraping (using RSS for the latest releases and scraping for search requests)? Would that be an acceptable temporary solution until you finish the new API?
You should be able to use https://animebytes.tv/scrape.php?torrent_pass=[:passkey]&type=[music,anime] now. The rest of the parameters mirror the torrents.php ones, with the exception of action=advanced, which is implied and hence not required. Result limiting is set to 50.
There is no need to log in or authenticate in any way other than providing the passkey. Additionally, such scraping is exempt from any form of VPN ban, as it does not trigger a browse action on the user account (similarly to RSS feeds or downloading a torrent file).
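Based on the description above, a client could assemble a query against the new endpoint like this (a Python sketch; the placeholder passkey and search values are illustrative, and the extra parameters are assumed to mirror torrents.php as stated in the post):

```python
from urllib.parse import urlencode

BASE = "https://animebytes.tv/scrape.php"

def build_scrape_url(passkey: str, media_type: str, **params) -> str:
    """Build a scrape.php query URL.

    media_type is 'anime' or 'music'; any extra keyword arguments are
    passed through as torrents.php-style search parameters. Results are
    capped at 50 per response by the site.
    """
    query = {"torrent_pass": passkey, "type": media_type, **params}
    return BASE + "?" + urlencode(query)

# Example: a title search, authenticated by passkey alone.
url = build_scrape_url("YOUR_PASSKEY", "anime",
                       searchstr="Boku", search_type="title")
```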
Thank you so much for the fast turnaround, @proton-ab.
2 observations:
https://animebytes.tv/scrape.php?torrent_pass=[:passkey]&type=anime&searchstr=Boku&search_type=title&year=&year2=&tags=&tags_type=0&sort=time_added&way=desc&hentai=2&releasegroup=&epcount=&epcount2=&artbooktitle=
@proton-ab thank you for the quick solution.
Had a quick look at the output; the number of files and the torrent size seem to be missing. If you could add the upload time too, that would be even better.
I'll try to find some time to migrate the Jackett implementation to the scrape.php API tomorrow.
Had a quick look at the output; the number of files and the torrent size seem to be missing. If you could add the upload time too, that would be even better.
Done
Even if the type is set to "anime", it also returns non-anime results when searching for something. Just for reference, my search was for
type=[anime,music] is equivalent to torrents.php and torrents2.php - it's a general type where anime is everything non-music. You can further narrow categories within each type by using the same parameters as with torrents.php.
@proton-ab I have almost finished the Jackett update, but it's currently useless because the API doesn't include the group title.
Example:
https://animebytes.tv/scrape.php?torrent_pass=XXX&type=anime&searchstr=Dragon%20Ball%20Super
First result group is https://animebytes.tv/torrents.php?id=24245 but the title (Dragon Ball Super) is not included in the JSON.
Currently, the Name attribute seems to include the artist instead of the show title.
If you could add the corresponding values (artist, title, and year) as individual fields (instead of concatenating them into the name field), it would be perfect.
Nice catch, that's obviously a bug. I'll also split them into separate fields.
@kaso17 No way to email you, so leaving this here in the hopes that you see it before anyone else. Animebytes.cs has someone's (hopefully not yours) passkey publicly visible.
Animebytes.cs has someone's (hopefully not yours) passkey publicly visible.
Passkey has now been revoked.
The bug with names should now be fixed. I've also split the name into parts; however, do note that for type=anime, GroupName will contain something like 'TV Series' while the actual name will be under 'SeriesName', whereas for type=music, GroupName will contain the actual album name. This is how we actually store the data right now, so there's not much we can do about it.
@darthShadow Thank you for letting me know. It was my key; very embarrassing.
Jackett v0.8.929 now contains a working AnimeBytes indexer again.
I tried changing as little as possible of the existing logic. If someone notices any bugs or has an improvement suggestion, please let us know.
@proton-ab: thank you for making this happen
@kaso17 @proton-ab, thanks for all the effort you guys have put in so far. I updated Jackett to the new version, but unfortunately, when I try to test the indexer after pasting my passkey, I get a parse error. The original error is in Dutch (because of my OS language), but it corresponds to the .NET error: "Object reference not set to an instance of an object."
@NewBlueMew Can you try deleting and adding the indexer again? I am using it myself without any issues so far.
@darthShadow, I tried it several times, but it didn't solve it for me. I also tried selecting some of the checkboxes (include RAW, add E0, etc.), but none resolved the parse error.
If I'm the only one with this issue, I'll try to reinstall Jackett, see if that solves my issue.
Reinstalled Jackett, AnimeBytes is working flawlessly now :). Thanks for the help!
@kaso17 Heads up - we now require the 'username' param to be present along with 'torrent_pass'. It should contain a string matching the username of the account that the passkey in 'torrent_pass' belongs to. The check is case-sensitive.
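With that change, a minimal request would now carry both parameters, roughly like this (a Python sketch with hypothetical placeholder values):

```python
from urllib.parse import urlencode

params = {
    # Must exactly match the account that owns the passkey; the
    # comparison on the server side is case-sensitive.
    "username": "YOUR_USERNAME",
    "torrent_pass": "YOUR_PASSKEY",
    "type": "anime",
}
url = "https://animebytes.tv/scrape.php?" + urlencode(params)
```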