I'm travelling at the moment, so I don't really have much time to research something new, but I did come across a simple yet quite useful algorithm two weeks ago. Let's talk about that.
Imagine a 2D space with a circle acting as an oracle. Initially, random points are thrown into this space, but the twist is that these points must learn to only spawn within the circle over time.
Now instead of randomly throwing points forever, we can guide them using a probabilistic rule.
- A new, randomly proposed position is accepted if it lies within the circle.
- If it's outside, it still has a small chance of being accepted, making the process more stochastic (see the sketch after this list).
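Here's a minimal sketch of that rule in Python. The circle, the bounds, and the name `outside_accept_prob` are all my own illustrative choices, not anything canonical:

```python
import math
import random

def inside_circle(x, y, radius=1.0):
    """Oracle: True if (x, y) lies within the circle centered at the origin."""
    return math.hypot(x, y) <= radius

def sample_points(n_points=1000, outside_accept_prob=0.05, bound=2.0):
    """Throw random points into the square [-bound, bound]^2.
    Points inside the circle are always accepted; points outside
    are accepted only with a small probability."""
    points = []
    while len(points) < n_points:
        x = random.uniform(-bound, bound)
        y = random.uniform(-bound, bound)
        if inside_circle(x, y) or random.random() < outside_accept_prob:
            points.append((x, y))
    return points

points = sample_points()
```

Over many samples, the accepted points concentrate inside the circle, with only a light scattering outside.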
But you might be thinking, "If I already know the circle's shape, why not just use it directly instead of this indirect oracle approach?"
Fair question. This is just a simple example. In real-world scenarios, you often deal with oracles where you can't directly access the full picture - you only get a probabilistic hint about what's inside or outside. You might not know the exact shape, but you can still sample from it using clever algorithms like the Metropolis-Hastings algorithm.
And here's how it goes:
1. Choose an initial state \( x_0 \) from the state space \( \mathcal{X} \).
2. At each step \( t \), propose a new state \( x' \) based on the current state \( x_t \) using a proposal distribution \( q(x' \mid x_t) \).
3. Accept the proposed move with probability \( P \) given by:
\[
P = \min\left(1, \frac{\pi(x') \, q(x_t \mid x')}{\pi(x_t) \, q(x' \mid x_t)}\right)
\]
Where \( \pi(x) \) is the target distribution and \( q(x' \mid x) \) is the proposal distribution.
4. If the move is accepted, set:
\( x_{t+1} = x' \)
otherwise:
\( x_{t+1} = x_t \)
5. Iterate this process for a fixed number of steps or until convergence.
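Here's a minimal sketch of the algorithm in Python, assuming a symmetric Gaussian proposal, so the \( q \) terms in the acceptance ratio cancel (this special case is plain Metropolis). The target \( \pi(x) \) here is an unnormalized standard normal, just for illustration:

```python
import math
import random

def target(x):
    """Unnormalized target density pi(x): a standard normal here,
    but any function proportional to the true density works."""
    return math.exp(-0.5 * x * x)

def metropolis_hastings(n_steps=10_000, step_size=1.0, x0=0.0):
    """Metropolis-Hastings with a symmetric Gaussian proposal,
    so q(x_t | x') / q(x' | x_t) = 1 and the acceptance
    probability reduces to min(1, pi(x') / pi(x_t))."""
    x = x0
    samples = []
    for _ in range(n_steps):
        x_prop = x + random.gauss(0.0, step_size)           # propose x' ~ q(x' | x_t)
        accept_prob = min(1.0, target(x_prop) / target(x))  # acceptance ratio
        if random.random() < accept_prob:
            x = x_prop                                      # accept: x_{t+1} = x'
        samples.append(x)                                   # reject: x_{t+1} = x_t
    return samples

samples = metropolis_hastings()
```

Note that the chain records the current state even on rejection - that repetition is what makes the sample counts match \( \pi \). The `step_size` is a tuning knob: too small and the chain crawls, too large and most proposals get rejected.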
And yes, I like Japan.
You can apply this in a ton of places, like image processing, probabilistic models, code analysis...
Any thoughts? DM me - @pwnfunction.