When the battle starts, the game speeds up really fast for both players.
The game should not speed up when the battle starts in online multiplayer; it runs normally when playing offline solo.
I used the latest PPSSPP build, v1.10.3-712-g7ed1ade56. Everything works fine except multiplayer: when the battle begins, the game speeds up.
The Bleach Heat the Soul series (tested only on 7, though) has the same problem, as it's made by the same developer (Eighting).
This possibly affects other games they developed as well.
It's strange for multiplayer to cause the game to render at a faster pace... I'm not familiar with how PSP games normally do their rendering; maybe these kinds of games rely on networking to delay each frame, while the most common approach in many games is to use vsync to trigger rendering?
For example, before the "Fight" begins, the FPS/VPS is 30/30 in Bleach Heat the Soul 7, but after "Fight" shows up it becomes 60/60 FPS/VPS, which looks too fast.
PS: Bleach 7 only uses 5 sceNet functions during the fight (none of which use callbacks), and they are called with the non-blocking flag set (the rightmost arg):
sceNetAdhocGetPtpStat(09fbf9f8, 09713900)
sceNetAdhocPollSocket(09fbfa0c, 2, 0, 1)
sceNetAdhocPtpAccept(1, [09fbfa54]=00:00:00:00:00:00, [09fbfa5a]=0, 0, 1)
sceNetAdhocPtpRecv(2,08bbb520,09fbfa5c,0,1)
sceNetAdhocPtpSend(2,08bbc060,09fbfa5c,0,1)
PPS: Fate Unlimited Codes also uses all 5 of these sceNet functions in a similar way during the fight, probably because it was made by the same developer.
Maybe callbacks are being registered and threads woken improperly, or callbacks run on the wrong threads? But it could definitely also be that our network syscalls are too "fast" in emulated time.
I'd recommend benchmarking common calls that are used many times per frame on a PSP and comparing the results to PPSSPP running the same PSP code. I've done this with a bunch of functions and it has improved both speed and timing.
-[Unknown]
Btw, I often see eatCycles in HLE functions, but I've never seen one in sceNet. Is this eatCycles supposed to add delays to the HLE call?
Maybe some sceNet functions need eatCycles too (but I'm not sure how many ticks/cycles/usec).
I wonder if @adenovan has ever benchmarked the sceNet functions, since he usually runs auto-tests on a real PSP.
Both the Bleach games and Fate work properly on JPCSP, so maybe investigate and compare with it.
https://youtube.com/watch?v=S6ERjsHDEnE&t=54s
Yes, hleDelayResult() will block the calling PSP thread for that long, but doesn't affect the "progression of time". The next thread will be scheduled immediately, as if the syscall took 0.00ms.
In contrast, hleEatCycles() (which can be used along with hleDelayResult()) consumes cycles, as if the syscall had to do a lot of work and took time. Since the PSP is single-core for user threads, no other thread can run during this time and so that time is "consumed" or "eaten" by the syscall. The next code (or thread, if rescheduled) will run after that progression of emulated time.
So for example, if you have PSP code like this (made up syscall):
while (!sceNetReadFromSockNonBlocking(sock, buf, 1)) {
    continue;
}
Then, assuming it doesn't reschedule, you'll actually get really poor performance because only the CPU instructions will be consuming time. If the actual theoretical sceNetReadFromSockNonBlocking takes 0.013 ms per call, it'd be much better to consume that time, making the tight loop move time forward faster (and ultimately using fewer total MHz on the phone/computer).
But an issue like that would typically just cause bad performance and low fps, not raise FPS. It could still raise FPS if the game "knows" certain syscalls are expensive and assumes by calling a sequence of them, it'll already have lost 16ms so doesn't need to wait for that vsync.
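To make the cycle-eating point concrete, here is a self-contained toy model (not actual PPSSPP code; the per-iteration and per-syscall cycle counts are invented for illustration, using 0.013 ms ≈ 2886 cycles at the PSP's 222 MHz):

```cpp
#include <cstdint>

// Toy model of an emulated clock. Cycle costs are made up; real values
// would come from benchmarking the syscalls on hardware.
struct EmuClock {
    uint64_t cycles = 0;
    void Eat(uint64_t c) { cycles += c; }  // analogous to hleEatCycles()
};

// Simulate N iterations of a tight non-blocking poll loop. Each
// iteration costs a few cycles of MIPS instructions; optionally the
// syscall also "eats" the time the real call would have taken.
uint64_t RunPollLoop(int iterations, bool syscallEatsCycles) {
    const uint64_t kLoopInsnCycles = 20;   // guessed per-iteration cost
    const uint64_t kSyscallCycles = 2886;  // ~0.013 ms at 222 MHz
    EmuClock clock;
    for (int i = 0; i < iterations; ++i) {
        clock.Eat(kLoopInsnCycles);
        if (syscallEatsCycles)
            clock.Eat(kSyscallCycles);     // emulated time moves forward
    }
    return clock.cycles;
}
```

With eating enabled, 1000 polls advance roughly 2.9M emulated cycles (~13 ms) instead of only 20k, so a game spinning on a non-blocking recv reaches its frame-time budget in far fewer host-side iterations.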
-[Unknown]
Both the Bleach games and Fate work properly on JPCSP so maybe investigate and compare with it.
https://youtube.com/watch?v=S6ERjsHDEnE&t=54s
Based on this video, those games really are supposed to run at 30 FPS, but when I checked all 5 sceNet functions used during the fight in Bleach 7 on JPCSP, I couldn't find any kind of delaying mechanism in the non-blocking versions.
LOL, Fate Unlimited Codes has it even worse: the seconds countdown timer also runs too fast. I thought timers like these were calculated from SystemTime/RTC or something, instead of accumulating the per-loop delay as elapsed time.
So maybe this game is using a loop like this:
countDown = 99000.0;            // start at 99 sec (in ms)
startTime = getSystemTimeMs();
while (fighting) {
    SyncData();                 // exchange adhoc data
    // process data
    Render();
    SleepMs(33.333);            // aim for 30 FPS
    countDown -= 33.333;        // timer advances per iteration, not from the RTC
}
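If that guess is right, the countdown is tied to loop iterations rather than the RTC, so frames pacing at 60 FPS would drain it twice as fast as real time. A tiny simulation of that hypothesis (all names and numbers here are made up):

```cpp
// Simulate the guessed countdown loop: the timer is decremented a fixed
// 33.333 ms per iteration instead of being derived from a real-time
// clock, so it is only accurate if each iteration really takes 33.333 ms.
double RemainingAfter(double realSeconds, double actualFrameMs) {
    double countDown = 99000.0;                  // 99 s, in ms
    int iterations = (int)(realSeconds * 1000.0 / actualFrameMs);
    for (int i = 0; i < iterations && countDown > 0; ++i)
        countDown -= 33.333;                     // assumed 30 FPS step
    return countDown / 1000.0;                   // seconds left on the timer
}
```

At the intended 33.333 ms per frame, 10 real seconds drains about 10 s of timer; at 16.667 ms per frame (60 FPS), the same 10 real seconds drains about 20 s, matching the "countdown runs too fast" symptom.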
Btw, does anyone know how the 60 FPS cheats on most games work? Do those 60 FPS cheats also make the time/timer run faster than normal?
I also remember that one of the Burnout games had that issue too (I don't remember which one, maybe both).
@isseihyoudoux could you try the test build from PR https://github.com/hrydgard/ppsspp/pull/13519 to see whether that PR really resolves this issue?
You will need to enable the workaround for now though, as a proper fix hasn't been found yet.
I'm testing this build now and the same issue seems to be present. With Force real clock sync off, it runs at around 40 fps, but the animation/game speed still seems fast. With it enabled, it still has the 60 fps/overspeed issue.
EDIT: Found the network slowdown option. Tested that and got 24 fps; now it's too slow, heh. This was with clock sync on. With it disabled, the framerate dips lower.
Yes, Force real clock sync also has the side effect of slowing down FPS; try disabling it when using the network slowdown.
How about enabling only the network slowdown? Did you get 30 fps during battle?
The character intros might get lower fps (i.e. 24 fps) because it's just a workaround, but during battle the fps should be stable at 30, right?
Btw, are you playing over the internet? It's possible that ping/latency affects the fps too (i.e. you get lower fps than normal).
x64
Nvidia 2080 Max-Q /Win10
DX11
Tested on same machine
Just ran two fresh instances. I get 30 fps at the start and for most of a round, but it will dip back down to 24 fps between rounds at random and lock there. Will try another capable Win10 machine to confirm.
EDIT: Same deal on my 970m/Win10 laptop. Both machines should have the specs to cover it, as both of my test units normally cap at 60 in adhoc and stay there for this game. (It just runs in hyperspeed, as we know.) I've now seen it lock at 24 fps specifically on both machines. Clock sync is not enabled, and this is two instances on the same machine as well.
Ugh, I guess this workaround isn't good. In my case I need to enable both Force real clock sync and the network slowdown in order to get 30 fps on both Fate Unlimited Codes and Bleach Heat the Soul 7; without Force real clock sync it goes back to 60 fps during battle :(
What's odd is that, with this workaround, it will occasionally cap back at 30 fps on the very same stage and stay that way for the round. Then it will go back to 24 fps. It has been wildly inconsistent for me so far: I keep hitting "retry" on the same match/stage, and a different framerate appears to lock in each time.
I'm not sure if anyone else can replicate this behavior... maybe there is something else to enable.
I just enabled "Force Real Clock Sync" during a set on both instances (970m) with the fix enabled, and I'm getting a much more consistent 30 fps in this same session. It hasn't dropped yet. So it works, somewhat, with these settings together?
If someone just made 60 fps patches for both games, that would alleviate the issue a bit.
Unless it's impossible.
Or maybe try toying with overclocking/downclocking the emulated CPU.
Specifically for Fate, I would just play the PS2 version instead, because it's natively 60 fps there.
Just finished testing multiple stages and rounds with Force real clock sync and the network slowdown on both machines. I can't replicate consistent 30 fps on both sets of hardware, but sometimes multiple rounds stay synced at 30 fps without issue.
Your mileage may vary. And yeah, it's unfortunate it's stuck this way. I also have the PS2 version ;>
Just tried with Fate: with the emulated CPU downclocked to 111 MHz, both instances run at 60 FPS at first but then settle to around 30 fps in battle.
Maybe reducing it further could make it consistently 30 fps.
But yeah, it's another workaround if you don't want to use the hack, I guess (the hack was disabled).
@somepunkid and @ANR2ME
EDIT: around 55 MHz seems alright on the one stage I tried.
I just tried setting both instances to Alternate speed 50% in the PPSSPP settings and toggling it with the default key (`) after starting a match on both instances. With Force real clock sync enabled and alternate speed at 50%, it cuts frames from 60 fps to 30 fps and is playable without input lag, but the audio will stutter. The best workaround I've tried so far, by far, consistency-wise.
The network slowdown workaround may randomly be more stable with sound, though. That's all I've got for now :>
I'll remove that workaround, since it barely has any impact without Force real clock sync :(
Downclocking the emulated CPU is the quickest workaround for it, as a real PSP also suffers from the same timing issue when cross-playing with a PS Vita, which is the better hardware.
These weird things in the sceNet adhoc library are easy to reproduce with the Bomberman games: they're only compatible between machines with the same CPU speed. The CPU speed really affects the timing of the packet-sent timestamp, and certain games have a range they use to decide whether to drop the connection or not.
Burnout looks normal across different PlayStation hardware.
It's only my guess, but if the game accumulates times, the faked timestamp on GetPeerList should be removed and packets should be timestamped when the data arrives.
During my investigation, the timestamps are only used for timeout calculation by comparing them with sceKernelGetSystemTimeWide. Due to unstable FPS on emulators (i.e. during a loading scene, or on a low-end device), a slight difference could trigger the timeout, so faking the timestamp helps prevent unexpected timeouts.
However, there are games (i.e. Falcom games) that can't deal with a faked timestamp that uses the latest time, because they get the value from sceKernelGetSystemTimeWide (the current timestamp) before calling GetPeerList (the last-recv timestamp). The subtraction can then come out negative, and since the game compares it against an unsigned timeout number, the negative result becomes a very large unsigned value, making the game think it has timed out.
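That wraparound failure is easy to reproduce in isolation. A minimal sketch (plain C++, not game code; `LooksTimedOut` and its parameters are made up for illustration):

```cpp
#include <cstdint>

// "now" is sampled before the peer's last-recv timestamp is refreshed,
// so a faked lastRecv that is newer than "now" makes the unsigned
// subtraction wrap: the difference becomes enormous, and any timeout
// check against it reports a bogus timeout.
bool LooksTimedOut(uint64_t nowUs, uint64_t lastRecvUs, uint64_t timeoutUs) {
    uint64_t elapsed = nowUs - lastRecvUs;  // wraps when lastRecvUs > nowUs
    return elapsed > timeoutUs;
}
```

A safer variant would compute the difference as a signed int64_t and clamp negative values to zero before comparing against the timeout.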
Some games rely on incoming data to decide whether to render a new frame or not, so the FPS/VPS is affected when more data than normal is received (i.e. a faster device flooding a slower device's buffer).
For example, Warriors Orochi 2 has suffered very low FPS/VPS recently. Normally this game only syncs (sends and receives) once per frame, but during unstable FPS more than one packet can arrive, and each additional packet received in a single sync attempt drops the FPS/VPS further. Lowering the buffer size prevents the FPS/VPS from getting too low, with the side effect of stutters from packets dropped when the buffer is full.
There are also games where the more often you send & recv, the faster the FPS/VPS gets (I forget which games, probably BattleZone). On BattleZone, JPCSP uses a 12 ms sync interval for GameMode, but I could only get 50-ish FPS with that interval; after lowering the interval to 10 ms I got 60 FPS (which is the normal FPS based on that game's ticking clock).
Regarding Fate Unlimited Codes, when I compared the logs at 30 FPS and 60 FPS, one of them had 1 or 2 additional AdhocPollSocket calls while waiting for incoming data, where each AdhocPollSocket call has about a 6~9 ms interval. (I forget which one had more AdhocPollSocket calls.)
Did the game log the PtpFlush method? Many games that use a matching context are more broken than the adhoc games written in the old days using only the PDP and PTP libraries, so that area needs more exploration if this game calls it too. If there is a complete log, I don't mind digging into the bug on a real console with an auto-test; just write down which functions you need info for.
This game doesn't use PtpFlush during battle (which is where the 60 FPS issue shows up); verbose logging on SCENET only shows these 5 functions:
sceNetAdhocGetPtpStat(09fbf9f8, 09713900)
sceNetAdhocPollSocket(09fbfa0c, 2, 0, 1)
sceNetAdhocPtpAccept(1, [09fbfa54]=00:00:00:00:00:00, [09fbfa5a]=0, 0, 1)
sceNetAdhocPtpRecv(2,08bbb520,09fbfa5c,0,1)
sceNetAdhocPtpSend(2,08bbc060,09fbfa5c,0,1)
Regarding AdhocMatching, it's only used for lobby creation/matchmaking. After all players are ready and the game has started, it's no longer used, and the game uses the regular adhoc functions during gameplay (except GameMode, which has its own synchronization method):
PdpCreate, PtpListen, PtpAccept, PtpOpen, PtpConnect to create/open a socket and establish a connection
Send/Recv to send/recv data
PtpFlush to make sure all data in the send buffer is sent before waiting for a reply (due to Nagle's algorithm)
PtpClose/PdpDelete to close the socket
GetPeerList/GetPeerInfo to detect timed-out & disconnected players
GetPdpStat/GetPtpStat to detect available data in the recv buffer, and also to check the PTP connection state.
PollSocket to check whether a socket can send, recv, connect, accept, etc.
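Strung together, the PTP side of that lifecycle looks roughly like this (pseudocode only; argument lists are omitted because the real signatures take more parameters than shown here):

```
// Setup: host listens and accepts, client opens and connects
listenSock = PtpListen(...);            // client side: PtpOpen(...) + PtpConnect(...)
peer = PtpAccept(listenSock, ...);

// Per-frame sync loop
while (playing) {
    if (PollSocket(...))  {             // can the socket send/recv right now?
        PtpSend(peer, outData, ...);
        PtpFlush(peer, ...);            // push out anything Nagle is holding back
        if (GetPtpStat(...))            // data waiting in the recv buffer?
            PtpRecv(peer, inData, ...);
    }
    GetPeerList(...);                   // watch for timed-out/disconnected players
}

// Teardown
PtpClose(peer);
```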
PS: It would be nice if you could benchmark all of these functions so we can set slightly more accurate hleEatCycles/hleEatMicro values :)
Roger, will do. It looks like the same desync issue as the Dissidia one under my investigation; I'll post the results after rebasing onto the current adhoc code.
Btw @adenovan, do you have a simple homebrew source that can be used for Adhocctl?
I want to test something, but I don't know how to initialize adhoc and adhocctl; most of the sample files from the unofficial PSPSDK I have don't contain an adhoc example.
It would be nice if you had a sample for AdhocMatching too :)
PS: I'm using the Minimalist PSPSDK, which is kind of old (https://sourceforge.net/projects/minpspw/), because I couldn't figure out how to set up the latest PSPSDK on Windows (it mentions pspsdk-1.0-win32.zip, but I never found a file like that).
@ANR2ME it's in my psp auto test repo. MinPSPW works best on Windows, but you need to disable driver signature enforcement to get full access to the shell.
There's no AdhocMatching TDD yet; besides that, Adhocctl is still triggered very manually, with buttons to test certain features. It's pretty fragile and often crashes the WLAN driver, since we're using the API outside the unknown rules of the PSP adhoc API manual. I like to trigger driver crashes so we first know exactly how it's not supposed to work, and grab the full stack trace of how it should not be handled that way.
The auto-test has some timeouts since it's automated, so I often write the homebrew manually just to verify the behavior of the low-level sceNet library.
I think something really is waking up the thread unexpectedly (or is this situation normal?)
Based on debug logs on all channels (only SCENET on Verbose) for Space Invaders Extreme (this game uses blocking PtpSend, PtpFlush, and PtpRecv with an infinite timeout of 0 during gameplay):

At timestamp 54:13:101 the "SIE" thread is supposed to be blocked by PtpRecv (which switches to the "user_main" thread) and resumed with "Returning" at 54:13:103 after receiving data.
But for some reason, not long after "SIE" entered the waiting state, sceKernelSignalSema woke up the "SIE" thread, which switched from "user_main" to "SIE" O.o Is this normal? Or did I put the current thread into waiting the wrong way?
PtpRecv blocks the thread using this code (uid = socketId = 4068, threadSocketId = ((u64)__KernelGetCurThread()) << 32 | socketId):
int WaitBlockingAdhocSocket(...) {
    int uid = (int)(threadSocketId & 0xFFFFFFFF);
    if (adhocSocketRequests.find(threadSocketId) != adhocSocketRequests.end()) {
        WARN_LOG(SCENET, "sceNetAdhoc[%d] - ThreadID[%d] WaitID[%d] already existed, Socket[%d] is busy!", type, (threadSocketId >> 32), uid, pspSocketId);
        // FIXME: Not sure if an Adhoc Socket can return ADHOC_BUSY or not (assuming it's similar to EINPROGRESS for Adhoc Sockets), or maybe we should return TIMEOUT instead?
        return ERROR_NET_ADHOC_BUSY; // ERROR_NET_ADHOC_TIMEOUT
    }
    ...
    CoreTiming::ScheduleEvent(usToCycles(100), adhocSocketNotifyEvent, threadSocketId);
    __KernelWaitCurThread(WAITTYPE_NET, uid, 0, 0, false, reason);
    // Fallback return value
    return ERROR_NET_ADHOC_TIMEOUT;
}
...
static int sceNetAdhocPtpRecv(...) {
    ...
    u64 threadSocketId = ((u64)__KernelGetCurThread()) << 32 | ptpsocket.id;
    return hleLogError(SCENET, WaitBlockingAdhocSocket(threadSocketId, PTP_RECV, id, buf, len, timeout, nullptr, nullptr, "ptp recv"), "Blocking thread(%d) for PtpRecv", threadSocketId >> 32);
    ...
}
And Resumed with this code:
static void __AdhocSocketNotify(u64 userdata, int cyclesLate) {
    ...
    __KernelResumeThreadFromWait(threadID, result);
    DEBUG_LOG(SCENET, "Returning (ThreadId: %d, WaitID: %d, error: %d) Result (%08x) of sceNetAdhoc[%d] - SocketID: %d", threadID, waitID, error, (int)result, req.type, req.id);
    // We are done with this socket
    adhocSocketRequests.erase(userdata);
}
Could someone explain why the "SIE" thread woke up even before the scheduled event resumed it with a "Returning" return value?
And how do I prevent the thread from resuming (where it will most likely call another blocking sceNet function) while a blocking socket operation is still in progress? Maybe by preventing the game from switching to the thread using a custom-defined WaitType?
Edit: Oops, it looks like there are 2 threads named "SIE": the blocking PtpRecv is on one SIE thread (296) while the blocking PtpSend is running on another SIE thread (294), so all is well :)