SPARQLRepository blocks indefinitely when there are no connections available in the pool

Description

Hi all,

I've run into an issue with SPARQLRepository and its default connection pool settings, which cause queries to block indefinitely while waiting for a connection in the pool to become available.

Whilst this behaviour may be reasonable in many environments, we're using the SPARQLRepository as a client in a web service. In this situation it is often preferable to drop the request with an error rather than accept more connections, which usually just causes more and more clients to block indefinitely while waiting for potentially long-running requests to complete.

I believe the underlying Apache HttpClient library supports this behaviour through a "connection request timeout" setting:

https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/config/RequestConfig.html#getConnectTimeout%28%29
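For illustration, here is a minimal sketch of what I mean at the HttpClient level, using RequestConfig to bound how long a request will wait for a pooled connection (the 2-second timeout and pool sizes are just placeholder values):

```java
import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class PoolTimeoutExample {

    public static CloseableHttpClient buildClient() {
        // Fail a request after 2s if no pooled connection becomes available,
        // instead of blocking indefinitely.
        RequestConfig requestConfig = RequestConfig.custom()
                .setConnectionRequestTimeout(2000)
                .build();

        return HttpClients.custom()
                .setDefaultRequestConfig(requestConfig)
                .setMaxConnTotal(20)
                .setMaxConnPerRoute(20)
                .build();
    }
}
```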

Unfortunately I haven't figured out how to set this through Sesame, and it appears that, unlike the pool size, this property isn't configurable via a system property.

So I guess there are three things I'd like to raise:

1) To see if it is possible to somehow work around this and configure it in Sesame (I've tried sub-classing SPARQLRepository to hook into the construction of the HttpClient; however, for some reason my overrides are being ignored). See the sketch after this list for the sort of wiring I'm after.

2) To raise the idea of making this more easily configurable in Sesame.

3) To raise the question of changing the default behaviour to throw an exception when the pool is exhausted, while still allowing this to be configured. I believe this would be preferable as it would fail fast when the pool's limit is exceeded, and the exception could even suggest a configuration setting to switch back to the blocking behaviour. The current blocking default means developers may be unaware that they are causing large bottlenecks.
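Regarding 1), this is roughly what I'm trying to achieve. It assumes SPARQLRepository will honour an externally built HttpClient via setHttpClient (from HttpClientDependent); the endpoint URL is just a placeholder, and this is a sketch rather than something I've got working:

```java
import org.apache.http.impl.client.CloseableHttpClient;
import org.openrdf.repository.sparql.SPARQLRepository;

public class SparqlClientWiring {

    public static SPARQLRepository createRepository(CloseableHttpClient httpClient) {
        // Placeholder endpoint URL.
        SPARQLRepository repository = new SPARQLRepository("http://example.org/sparql");
        // Assumption: the repository uses the supplied HttpClient (with its
        // connection request timeout) instead of lazily creating its own.
        repository.setHttpClient(httpClient);
        repository.initialize();
        return repository;
    }
}
```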

Thanks again.

Environment

None

Assignee

Jeen Broekstra

Reporter

Rick Moynihan

Labels

Components

Affects versions

Priority

Major