It’s a matter of definitions. Vsync means that the enqueued spare buffer and the screen buffer are only exchanged during the “vertical retrace” when a whole screen buffer has been presented to the screen, thereby preventing the “tearing” that happens when half of the screen is from one frame and the other half comes from a different frame. It is the act of exchanging buffers which is synchronized, not the rendering in the game. Dropping whole frames does not cause tearing, so it is compatible with vsync as defined above. Another definition of vsync is basically “as above, but no dropped frames allowed”, in which case dropping frames is obviously not allowed.
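To make the "dropped frames don't tear" point concrete, here is a toy model (Python, with illustrative names only - not any real API): the display latches a complete back buffer only at the retrace, so every scan-out is a single whole frame even when a frame is skipped.

```python
# Toy model: the renderer overwrites a single "spare" slot as fast as
# it likes; the display latches that slot only at the vertical retrace.
# Every latched image is one whole frame (no tearing), but any frame
# finished and then overwritten between two retraces is silently dropped.
class FlipPresenter:
    def __init__(self):
        self.spare = None       # most recently completed frame
        self.on_screen = None   # frame currently being scanned out

    def frame_complete(self, frame):
        self.spare = frame      # may overwrite a not-yet-displayed frame

    def vertical_retrace(self):
        if self.spare is not None:
            self.on_screen = self.spare  # whole-buffer exchange, never partial
            self.spare = None
        return self.on_screen

p = FlipPresenter()
p.frame_complete(1)
p.frame_complete(2)                # frame 1 is dropped, never shown
assert p.vertical_retrace() == 2   # what is shown is all of frame 2
```

The key property is that `on_screen` is only ever assigned a complete frame, so half-and-half images (tearing) cannot occur regardless of how many frames were dropped.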
Worth chiming in - that is not actually true. Vsync - in the context of graphics APIs - typically means that calls to present buffers are synchronized with the display's presentation timing. In that context, silently dropping frames and allowing rendering to continue is a blatant violation of the API specification and will break applications that rely on that behavior. As a result, the typical method of triple buffering is to treat the swap chain as a FIFO queue at most three frames long, and to simply block on submission if the queue is already full. This satisfies the behavioral requirements of the vsync APIs while still allowing additional frames to be buffered.
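A minimal sketch of that blocking FIFO behavior (Python, with hypothetical `present`/`vblank` names - a real swap chain lives in the driver):

```python
from queue import Queue

# Hypothetical model of a vsync'd swap chain: a FIFO queue at most
# three buffers deep. present() blocks when the queue is full, which
# is exactly the back-pressure a vsync'd application relies on for timing.
class SwapChain:
    def __init__(self, depth=3):
        self.queue = Queue(maxsize=depth)

    def present(self, frame):
        # Blocks if `depth` frames are already queued - rendering cannot
        # run ahead of the display by more than the queue depth.
        self.queue.put(frame)

    def vblank(self):
        # Called once per vertical retrace: the display consumes the
        # oldest queued frame. Frames are shown in order, never dropped.
        if not self.queue.empty():
            return self.queue.get()
        return None

chain = SwapChain()
for f in range(3):
    chain.present(f)           # fills the queue without blocking
assert chain.queue.full()      # a fourth present() would now block
assert chain.vblank() == 0     # display consumes frames in FIFO order
```

Note that no frame submitted here is ever discarded; the only way the renderer slows down is by blocking in `present`, which matches the timing contract of the vsync APIs.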
To get more into the nitty gritty with an actual example - OpenGL uses the swap interval concept, which specifies the minimum number of v-blanks that must occur between buffer swaps. In that case, the expected behavior when you call SwapBuffers with a non-zero swap interval is to block until at least that many v-blanks have passed since the last SwapBuffers call. As a result, triple buffering as you describe it is not possible when you have a non-zero swap interval.
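That timing rule can be sketched as a small simulation (Python, made-up names; real GL sets the interval via extensions like WGL_EXT_swap_interval or GLX_EXT_swap_control, and the blocking happens inside the driver):

```python
# Toy model of GL's swap interval rule: swap_buffers() may not complete
# until at least `swap_interval` v-blanks have occurred since the
# previous swap. We count v-blanks instead of timing a real display.
class Display:
    def __init__(self, swap_interval=1):
        self.swap_interval = swap_interval
        self.vblank_count = 0
        self.last_swap_vblank = 0

    def wait_vblank(self):
        self.vblank_count += 1  # one vertical retrace elapses

    def swap_buffers(self):
        # "Block" (here: advance the simulated clock) until enough
        # v-blanks have passed since the last swap.
        waited = 0
        while self.vblank_count - self.last_swap_vblank < self.swap_interval:
            self.wait_vblank()
            waited += 1
        self.last_swap_vblank = self.vblank_count
        return waited

d = Display(swap_interval=2)
assert d.swap_buffers() == 2   # first swap waits two v-blanks
d.wait_vblank()
assert d.swap_buffers() == 1   # one already elapsed, so it waits one more
```

Because the caller is stalled inside `swap_buffers`, there is no window in which a third completed frame could replace a queued one - which is why this contract rules out the drop-the-oldest-frame style of triple buffering.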
With all that said, there are ways of getting the behavior you described with OpenGL if you have a compositor that behaves like this. I might be mistaken about this, but AFAIK this is how OS X and iOS behave when you have vsync disabled - but it’s because the compositor controls the “front buffer” and not your application, and they (if I remember right) always synchronize that buffer swap. I don’t know about Android, but it’s possible it does as well. The big question with compositing window managers is whether they force synchronization during the compositing swap - if they do, you have to implement correct triple buffering in order not to block rendering during presents. If they don’t - because, for instance, an application has enabled adaptive v-sync - they typically fall back to double buffering and more traditional blocking behavior. It’s a crapshoot, and you can’t really rely on the behavior being consistent across platforms.
Direct3D apps can specify this behavior when creating their swap chain, and it is possible for the GL ICD on Windows to implement this behavior using DXGI, but again - there is no way to tell the ICD vendor that this is your desired behavior without using proprietary APIs or their driver control panel.
IIRC, Vulkan doesn’t support this feature yet, but there is a proposal out there that will hopefully allow it.
But yes. Keep in mind the entire purpose of vsync is to allow the application to synchronize with the presentation on the display, and in that context the method of triple buffering you describe is mutually exclusive with an application requesting vsync to be enabled.
-Luke