Local Order Book Tutorial Part 3: Keeping the WebSocket Connection

Intermediate
Updated Nov 28, 2025
6m

Key Takeaways

  • Learn to identify and discard stale buffered events that could corrupt your local order book, ensuring only valid, up-to-date changes are processed.

  • Discover how to reset your local order book using a fresh snapshot, then apply buffered and real-time updates in the correct sequence to maintain accuracy.

  • Understand critical safeguards like detecting missed updates, restarting sync when needed, and ensuring your local book mirrors the live market.


Introduction

Keeping a local order book consistently up to date is a critical step in maintaining its reliability and accuracy.

In the previous articles of this series, we covered the creation of a WebSocket stream and event buffering, and how to retrieve an order book and process Depth stream events based on snapshots.

This final part will walk through the process of discarding outdated buffered events, setting the local order book to the latest snapshot, and applying both buffered and new incoming updates to keep the book fully synchronized with the live market.

Discard Outdated Events from the Buffer

Once the snapshot has been successfully retrieved, the system should compare it with the list of buffered depth update events received before the snapshot was obtained. These buffered events represent order book updates that occurred while the snapshot request was being processed.

Because the market can move extremely quickly, some of the buffered updates may already be obsolete by the time the snapshot is received. Keeping these outdated events could corrupt the local order book. 

To ensure accuracy, each buffered event should be checked against the last update ID from the snapshot. The goal is to remove all events where the event’s u (its final update ID) is less than or equal to the snapshot’s lastUpdateId.

Only the remaining events, those that occurred after the snapshot, should be kept. For the first valid buffered event, the snapshot’s lastUpdateId + 1 should fall within the event’s [U; u] range, indicating that the event can be safely applied next.

This filtering step ensures that no outdated information remains in memory and that all subsequent updates will be applied in the correct order. Without it, the local order book might include stale bid or ask prices, misleading the trading logic about available liquidity or best execution prices.

Example of code to discard outdated events:

Code Snippet
# Work on a shallow copy so the WebSocket callback can keep appending safely
buffer_copy = copy.copy(buffer)

# Keep only events newer than the snapshot (u > lastUpdateId)
filtered = [ev for ev in buffer_copy if ev.u > snapshot_last_id]

if not filtered:
    logging.warning(
        "All buffered events are older than snapshot lastUpdateId; waiting for fresh diffs..."
    )
    continue  # back to the top of the synchronization loop (see the final code)
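As a quick sanity check, the filtering rule can be exercised in isolation. The DepthEvent class below is a hypothetical stand-in for the SDK's event objects, which expose U (the first update ID) and u (the final update ID):

```python
from dataclasses import dataclass

@dataclass
class DepthEvent:
    U: int  # first update ID covered by the event
    u: int  # final update ID covered by the event

def filter_stale(events, snapshot_last_id):
    """Drop every event whose final update ID is already covered by the snapshot."""
    return [ev for ev in events if ev.u > snapshot_last_id]

events = [DepthEvent(U=90, u=100), DepthEvent(U=98, u=105), DepthEvent(U=106, u=110)]
fresh = filter_stale(events, snapshot_last_id=100)
# The first event (u=100) is fully covered by the snapshot and is discarded;
# the other two still carry new information.
assert [ev.u for ev in fresh] == [105, 110]
```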

When this step is complete, the local order book can be safely set to the state represented by the snapshot.

Set Local Order Book to the Snapshot

After discarding outdated buffered events, the next step is to replace the contents of the local order book with the data from the snapshot. The snapshot represents the most recent, confirmed state of the market at a specific point in time.
By setting the local order book to the snapshot, the system ensures that all current bid and ask levels match exactly what the exchange reported.

This operation should include updating both sides of the book, bids and asks, and assigning the update ID of the snapshot as the current local update ID. This value acts as the synchronization anchor for all future updates. Any incoming event will be compared against this value to determine whether it should be applied or ignored.

Example of code to set the local order book: 

Code Snippet
order_book = {
    "bids": dict(snapshot["bids"]),
    "asks": dict(snapshot["asks"]),
}
local_update_id = snapshot_last_id

Setting the order book to the snapshot is a crucial step because it resets any inconsistencies that may have accumulated in the local copy. Even minor mismatches between the local and exchange order books can grow over time, especially in high-frequency markets where updates occur hundreds of times per second.

Apply Events to Local Order Book

Once the local order book has been set to the snapshot, the buffered events that were retained after filtering can now be applied sequentially. Each event represents one or more changes in the market’s bid and ask levels.

The application process follows a strict procedure to maintain consistency and avoid applying invalid or out-of-sequence updates:

Ignore outdated events

If the event’s u (its final update ID) is less than the local order book’s update ID, the event is obsolete and should be ignored. Applying it would revert the order book to an earlier state, which must be avoided.

Restart if sequence is broken

If the event’s U (its first update ID) is greater than the local order book’s update ID, it means that one or more updates were missed. In that case, the local order book can no longer be trusted. The correct approach is to discard it and restart the synchronization process from the beginning by fetching a new snapshot.

Update each price level

For every price level in the event’s bid (b) and ask (a) arrays:

  • If the updated quantity is non-zero, the price level should be set to that quantity: inserted if it does not yet exist, or overwritten if it does, since event quantities are absolute values rather than deltas.

  • If the updated quantity is zero, that price level should be removed from the order book entirely.

Set the new update ID

After applying the event successfully, the order book’s update ID should be set to the event’s final update ID (u). This ensures that the local order book reflects all changes up to that specific event.
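Before looking at the full procedure, the per-level rules can be illustrated with a simplified, single-side helper; it mirrors the apply_update function used in the final code, which walks both the b and a arrays of an event:

```python
def apply_levels(book_side, levels):
    """Apply [price, quantity] pairs to one side of the book (a dict of price -> quantity).

    Depth event quantities are absolute values, not deltas: a non-zero quantity
    replaces whatever was previously stored, and a zero quantity deletes the level.
    """
    for price, qty in levels:
        if float(qty) == 0.0:
            book_side.pop(price, None)  # zero quantity removes the price level
        else:
            book_side[price] = qty      # insert a new level or overwrite an existing one

bids = {"10.00": "5.0", "9.99": "1.0"}
apply_levels(bids, [["10.00", "0.0"], ["9.98", "2.5"]])
# "10.00" is removed, "9.98" is inserted, "9.99" is untouched.
```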

Example of how to set up this procedure:

Code Snippet
def apply_buffered_events(order_book, buffer, local_update_id):
    applied = 0

    for event in buffer:
        if event.u < local_update_id:
            continue

        if event.U > local_update_id + 1:
            logging.warning(
                f"Gap detected between events (event.U={event.U}, local_update_id={local_update_id}). "
                "Resync required."
            )
            return local_update_id, False

        apply_update(order_book, event)
        local_update_id = event.u
        applied += 1

    logging.info(
        f"Applied {applied} buffered events. Local order book now synced to {local_update_id}."
    )
    return local_update_id, True

Once all buffered events have been processed in this way, the system can continue applying real-time events as they arrive from the exchange’s data stream.

From that point onward, the local order book should remain fully synchronized, provided that no updates are skipped and network latency remains manageable.
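That steady-state path follows the same rules as the buffered replay. A minimal sketch of handling one live event (events are shown here as plain dicts for illustration; the SDK delivers objects with U, u, b, and a attributes):

```python
import logging

def apply_live_event(order_book, event, local_update_id):
    """Apply one real-time diff event; return (new_update_id, still_synced)."""
    if event["u"] <= local_update_id:
        return local_update_id, True  # already reflected in the book; ignore
    if event["U"] > local_update_id + 1:
        logging.warning("Missed updates; a fresh snapshot is required.")
        return local_update_id, False  # gap detected: caller must resync
    for side, levels in (("bids", event["b"]), ("asks", event["a"])):
        for price, qty in levels:
            if float(qty) == 0.0:
                order_book[side].pop(price, None)  # zero quantity deletes the level
            else:
                order_book[side][price] = qty      # absolute quantity overwrites
    return event["u"], True

book = {"bids": {"10.00": "1.0"}, "asks": {}}
event = {"U": 101, "u": 102, "b": [["10.00", "0"]], "a": [["10.05", "3.0"]]}
local_id, synced = apply_live_event(book, event, 100)
# local_id is now 102 and the book holds a single ask at 10.05.
```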

Final Code

The process of building a local order book is now complete. The resulting code should look as follows:

Code Snippet
import asyncio
import logging
import copy
from binance_sdk_spot.spot import ConfigurationWebSocketStreams, Spot, SPOT_WS_STREAMS_PROD_URL

logging.basicConfig(level=logging.INFO)

configuration_ws_streams = ConfigurationWebSocketStreams(
    stream_url=SPOT_WS_STREAMS_PROD_URL
)

client = Spot(config_ws_streams=configuration_ws_streams)


async def buffer_events(buffer: list):
    """
    Step 1 & 2 — Open WebSocket connection and buffer events
    """
    connection = None
    try:
        connection = await client.websocket_streams.create_connection()
        stream = await connection.diff_book_depth(symbol="bnbusdt")

        stream.on("message", lambda data: buffer.append(data))
        await asyncio.Future()  # keep alive
    except asyncio.CancelledError:
        pass
    except Exception as e:
        logging.error(f"buffer_events() error: {e}")
    finally:
        if connection:
            await connection.close_connection(close_session=True)


async def depth_snapshot():
    """
    Step 3 — Get the initial depth snapshot
    """
    response = client.rest_api.depth(symbol="BNBUSDT", limit=1000)
    data = response.data()
    await asyncio.sleep(1)
    return {
        "last_update_id": data.last_update_id,
        "bids": {price: qty for price, qty in data.bids},
        "asks": {price: qty for price, qty in data.asks},
    }


def apply_update(order_book, update):
    for price, qty in update.b:
        if float(qty) == 0.0:
            order_book["bids"].pop(price, None)
        else:
            order_book["bids"][price] = qty

    for price, qty in update.a:
        if float(qty) == 0.0:
            order_book["asks"].pop(price, None)
        else:
            order_book["asks"][price] = qty


def apply_buffered_events(order_book, buffer, local_update_id):
    """
    Step 7: apply buffered events sequentially
    """
    applied = 0

    for event in buffer:
        if event.u < local_update_id:
            continue

        if event.U > local_update_id + 1:
            logging.warning(
                f"Gap detected between events (event.U={event.U}, local_update_id={local_update_id}). "
                "Resync required."
            )
            return local_update_id, False

        apply_update(order_book, event)
        local_update_id = event.u
        applied += 1

    logging.info(
        f"Applied {applied} buffered events. Local order book now synced to {local_update_id}."
    )
    return local_update_id, True

async def local_order_book():
    buffer = []
    task = asyncio.create_task(buffer_events(buffer))

    try:
        while True:
            await asyncio.sleep(3)

            if not buffer:
                logging.warning("No depth updates received yet, skipping this cycle.")
                continue

            # Step 3 again: get a new snapshot
            snapshot = await depth_snapshot()
            snapshot_last_id = snapshot["last_update_id"]

            buffer_copy = copy.copy(buffer)

            # Step 5: discard outdated events (u <= lastUpdateId)
            filtered = [ev for ev in buffer_copy if ev.u > snapshot_last_id]

            if not filtered:
                logging.warning(
                    "All buffered events are older than snapshot lastUpdateId; waiting for fresh diffs..."
                )
                continue

            # Step 4: verify snapshot vs first buffered event
            first = filtered[0]
            if not (first.U <= snapshot_last_id + 1 <= first.u):
                logging.warning(
                    "Snapshot is inconsistent with buffered updates "
                    f"(snapshot_last_id={snapshot_last_id}, first.U={first.U}, first.u={first.u}). Restarting sync..."
                )

                task.cancel()
                try:
                    await task
                except asyncio.CancelledError:
                    pass

                buffer = []
                task = asyncio.create_task(buffer_events(buffer))

                for _ in range(10):
                    await asyncio.sleep(1)
                    if buffer:
                        break
                else:
                    logging.warning("New buffer still empty; retrying...")
                    continue

                continue

            # Step 6: set local order book to snapshot
            order_book = {
                "bids": dict(snapshot["bids"]),
                "asks": dict(snapshot["asks"]),
            }
            local_update_id = snapshot_last_id

            # Step 7: apply buffered events sequentially
            local_update_id, success = apply_buffered_events(order_book, filtered, local_update_id)

            if not success:
                logging.warning("Local order book desynced. Restarting synchronization.")
                continue

            buffer[:] = [ev for ev in buffer if ev.u > local_update_id]
            logging.info(
                f"Local book synced: {len(order_book['bids'])} bids / {len(order_book['asks'])} asks."
            )

            await asyncio.sleep(1)

    finally:
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass

if __name__ == "__main__":
    asyncio.run(local_order_book())

Closing Thoughts

Building and maintaining a local order book is a process that demands precision, patience, and a clear understanding of how live market data works. Across these three articles, the complete workflow has been outlined: from establishing a real-time connection and buffering incoming updates, to retrieving a reliable snapshot, and finally, keeping the order book synchronized through an orderly update process.

Together, these steps form a complete framework for managing a local order book in real time. Mastering this workflow gives traders and developers not just a technical advantage, but also a strategic one: faster insights, more reliable analytics, and a stronger foundation for automated trading systems. Whether used for backtesting, market-making, or algorithmic execution, maintaining the local order book correctly allows streaming market data to be converted into precise, actionable information, keeping the system fully synchronized with live market conditions.


Disclaimer: This content is presented to you on an “as is” basis for general information and educational purposes only, without representation or warranty of any kind. It should not be construed as financial, legal or other professional advice, nor is it intended to recommend the purchase of any specific product or service. You should seek your own advice from appropriate professional advisors. Where the article is contributed by a third party contributor, please note that those views expressed belong to the third party contributor, and do not necessarily reflect those of Binance Academy. Please read our full disclaimer here for further details. Digital asset prices can be volatile. The value of your investment may go down or up and you may not get back the amount invested. You are solely responsible for your investment decisions and Binance Academy is not liable for any losses you may incur. This material should not be construed as financial, legal or other professional advice. For more information, see our Terms of Use and Risk Warning.