When a replay job is interrupted, the Replayer reports statistics on how many envelopes were delivered, were marked offline, or encountered errors. These numbers may be inaccurate because, in some circumstances, the Replayer may still be attempting to deliver envelopes after the statistics are shown.
When a replay job is interrupted, the Replayer automatically deletes the RabbitMQ queues that it created. In some circumstances (for example, when the Replayer is still attempting to deliver envelopes after the interruption), some queues might not be deleted. These queues can be deleted manually via the RabbitMQ management console, as described in the Data Hub User Guide.
When a replay job has a destination that cannot accept envelopes as quickly as the Replayer can query them (that is, the destination is throttled more than the Replayer is), the Replayer buffers queried envelopes in a RabbitMQ queue. If the Repository database is large enough, and the rate differential between the Replayer and the destination is large enough, the queue can grow until RabbitMQ hits its own internal memory limits and stops accepting new envelopes.
This typically only happens when there are a few million or more envelopes queued.
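To see how quickly a backlog of that size can accumulate, consider a rough back-of-envelope estimate. The rates below are illustrative assumptions, not measured Replayer figures; the point is only that a sustained rate differential grows the queue linearly with time.

```python
# Illustrative estimate of RabbitMQ queue growth during a replay whose
# destination is slower than the Replayer. Both rates are assumed values.
QUERY_RATE = 5000     # envelopes/second the Replayer queries (assumed)
DELIVERY_RATE = 1000  # envelopes/second the throttled destination accepts (assumed)

def queued_after(seconds: int) -> int:
    """Envelopes sitting in the queue after `seconds` of sustained replay."""
    return max(0, (QUERY_RATE - DELIVERY_RATE) * seconds)

# At a 4,000/s differential, the queue passes the "few million" mark
# in well under an hour of sustained replay.
print(queued_after(15 * 60))  # 3,600,000 envelopes after 15 minutes
```

With these assumed rates, a fifteen-minute replay already queues several million envelopes, which is the regime where RabbitMQ's memory alarms can trigger.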
If this happens, it may resolve itself as envelopes are pulled off the queue and delivered to the destination. If it does not, you may need to cancel the replay and retry it with lower concurrency settings; specifically, try reducing the MaxEndpointTasks setting in the config file.
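As an illustrative sketch only: the exact config file format and the valid range for MaxEndpointTasks depend on your installation, but a reduced concurrency entry might look like the following (the value shown is an assumption, not a documented default).

```
# Lower the Replayer's delivery concurrency so a throttled destination
# is not outpaced; the value 4 here is purely illustrative.
MaxEndpointTasks = 4
```

After changing the setting, restart the replay job so the new concurrency limit takes effect.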