Question
Does anyone have a good resource on implementing a shared object pool strategy for a limited resource, in the vein of SQL connection pooling? (i.e. one that is fully implemented to be thread-safe.)
To follow up on @Aaronaught's request for clarification: the pool usage would be for load-balancing requests to an external service. To put it in a scenario that is probably easier to understand immediately than my direct situation: I have a session object that functions similarly to NHibernate's ISession object. Each unique session manages its connection to the database. Currently I have one long-running session object and am encountering issues where my service provider is rate-limiting my usage of this individual session.
Because they do not expect a single session to be treated as a long-running service account, they apparently treat it as a client that is hammering their service. Which brings me to my question here: instead of having one individual session, I would create a pool of different sessions and split the requests to the service across those multiple sessions, instead of creating a single focal point as I was doing previously.
Hopefully that background offers some value, but to directly answer some of your questions:
Q: Are the objects expensive to create?
A: No, the objects are a pool of limited resources.
Q: Will they be acquired/released very frequently?
A: Yes; once again, they can be thought of as NHibernate ISessions, where one is usually acquired and released for the duration of every single page request.
Q: Will a simple first-come-first-serve suffice or do you need something more intelligent, i.e. that would prevent starvation?
A: A simple round-robin type distribution would suffice. By starvation I assume you mean that if there are no available sessions, callers become blocked waiting for releases. This isn't really applicable, since the sessions can be shared by different callers. My goal is to distribute the usage across multiple sessions, as opposed to one single session.
I believe this is probably a divergence from the normal usage of an object pool, which is why I originally left this part out and planned to just adapt the pattern to allow sharing of objects, as opposed to ever allowing a starvation situation to occur.
Q: What about things like priorities, lazy vs. eager loading, etc.?
A: There is no prioritization involved, for simplicity's sake just assume that I would create the pool of available objects at the creation of the pool itself.
Accepted Answer
Object Pooling in .NET Core
.NET Core has an implementation of object pooling added to the base class library (BCL). You can read the original GitHub issue here and view the code for System.Buffers. Currently, ArrayPool is the only type available; it is used to pool arrays. There is a nice blog post here.
namespace System.Buffers
{
    public abstract class ArrayPool<T>
    {
        public static ArrayPool<T> Shared { get; internal set; }

        public static ArrayPool<T> Create(int maxBufferSize = <number>, int numberOfBuffers = <number>);

        public T[] Rent(int size);
        public T[] Enlarge(T[] buffer, int newSize, bool clearBuffer = false);
        public void Return(T[] buffer, bool clearBuffer = false);
    }
}
An example of its usage can be seen in ASP.NET Core. Because it is in the .NET Core BCL, ASP.NET Core can share its object pool with other libraries, such as Newtonsoft.Json's JSON serializer. You can read this blog post for more information on how Newtonsoft.Json is doing this.
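For reference, a minimal sketch of renting and returning a buffer with the ArrayPool API as it actually shipped (note that Rent takes a minimum length and may hand back a larger array):

```csharp
using System;
using System.Buffers;

class ArrayPoolExample
{
    static void Main()
    {
        // Rent a buffer of at least 1024 bytes; the returned array may be larger.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(1024);
        try
        {
            // Use the buffer...
            Console.WriteLine(buffer.Length >= 1024); // True
        }
        finally
        {
            // Always return the buffer, or the pooling benefit is lost.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```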
Object Pooling in Microsoft Roslyn C# Compiler
The new Microsoft Roslyn C# compiler contains the ObjectPool type, which is used to pool frequently used objects which would normally get new'ed up and garbage collected very often. This reduces the amount and size of garbage collection operations which have to happen. There are a few different sub-implementations all using ObjectPool (See: Why are there so many implementations of Object Pooling in Roslyn?).
1 - SharedPools - Stores a pool of 20 objects or 100 if the BigDefault is used.
// Example 1 - In a using statement, so the object gets freed at the end.
using (PooledObject<Foo> pooledObject = SharedPools.Default<List<Foo>>().GetPooledObject())
{
    // Do something with pooledObject.Object
}

// Example 2 - No using statement, so you need to be sure no exceptions are thrown.
List<Foo> list = SharedPools.Default<List<Foo>>().AllocateAndClear();
// Do something with list
SharedPools.Default<List<Foo>>().Free(list);

// Example 3 - I have also seen this variation of the above pattern, which ends up
// the same as Example 1, except that Example 1 seems to create a new instance of
// the IDisposable PooledObject<T> object. This is probably the preferred option
// if you want fewer GCs.
List<Foo> list2 = SharedPools.Default<List<Foo>>().AllocateAndClear();
try
{
    // Do something with list2
}
finally
{
    SharedPools.Default<List<Foo>>().Free(list2);
}
2 - ListPool and StringBuilderPool - Not strictly separate implementations, but wrappers around the SharedPools implementation shown above, specifically for List and StringBuilder. So these re-use the pool of objects stored in SharedPools.
// Example 1 - No using statement, so you need to be sure no exceptions are thrown.
StringBuilder stringBuilder = StringBuilderPool.Allocate();
// Do something with stringBuilder
StringBuilderPool.Free(stringBuilder);

// Example 2 - Safer version of Example 1.
StringBuilder stringBuilder2 = StringBuilderPool.Allocate();
try
{
    // Do something with stringBuilder2
}
finally
{
    StringBuilderPool.Free(stringBuilder2);
}
3 - PooledDictionary and PooledHashSet - These use ObjectPool directly and have a totally separate pool of objects. They store a pool of 128 objects.
// Example 1
PooledHashSet<Foo> hashSet = PooledHashSet<Foo>.GetInstance();
// Do something with hashSet.
hashSet.Free();

// Example 2 - Safer version of Example 1.
PooledHashSet<Foo> hashSet2 = PooledHashSet<Foo>.GetInstance();
try
{
    // Do something with hashSet2.
}
finally
{
    hashSet2.Free();
}
Microsoft.IO.RecyclableMemoryStream
This library provides pooling for MemoryStream objects. It's a drop-in replacement for System.IO.MemoryStream. It has exactly the same semantics. It was designed by Bing engineers. Read the blog post here or see the code on GitHub.
var sourceBuffer = new byte[] { 0, 1, 2, 3, 4, 5, 6, 7 };
var manager = new RecyclableMemoryStreamManager();
using (var stream = manager.GetStream())
{
    stream.Write(sourceBuffer, 0, sourceBuffer.Length);
}
Note that RecyclableMemoryStreamManager should be declared once, and it will live for the entire process; this is the pool. It is perfectly fine to use multiple pools if you desire.
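A common way to follow that advice (a sketch; the class and field names here are mine, not from the library docs) is to hold the manager in a single static field so the whole process shares one pool:

```csharp
using Microsoft.IO;

public static class Streams
{
    // One manager (i.e. one pool) for the whole process.
    public static readonly RecyclableMemoryStreamManager Manager =
        new RecyclableMemoryStreamManager();
}

// Elsewhere in the code base:
// using (var stream = Streams.Manager.GetStream())
// {
//     stream.Write(buffer, 0, buffer.Length);
// }
```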
Another Answer
This question is a little trickier than one might expect due to several unknowns: The behaviour of the resource being pooled, the expected/required lifetime of objects, the real reason that the pool is required, etc. Typically pools are special-purpose - thread pools, connection pools, etc. - because it is easier to optimize one when you know exactly what the resource does and more importantly have control over how that resource is implemented.
Since it's not that simple, what I've tried to do is offer up a fairly flexible approach that you can experiment with to see what works best. Apologies in advance for the long post, but there is a lot of ground to cover when it comes to implementing a decent general-purpose resource pool, and I'm really only scratching the surface.
A general-purpose pool would have to have a few main "settings", including:
- Resource loading strategy - eager or lazy;
- Resource loading mechanism - how to actually construct one;
- Access strategy - you mention "round robin" which is not as straightforward as it sounds; this implementation can use a circular buffer which is similar, but not perfect, because the pool has no control over when resources are actually reclaimed. Other options are FIFO and LIFO; FIFO will have more of a random-access pattern, but LIFO makes it significantly easier to implement a Least-Recently-Used freeing strategy (which you said was out of scope, but it's still worth mentioning).
For the resource loading mechanism, .NET already gives us a clean abstraction - delegates.
private Func<Pool<T>, T> factory;
Pass this through the pool's constructor and we're about done with that. Using a generic type with a new() constraint works too, but this is more flexible.
Of the other two parameters, the access strategy is the more complicated beast, so my approach was to use an inheritance (interface) based approach:
public class Pool<T> : IDisposable
{
    // Other code - we'll come back to this

    interface IItemStore
    {
        T Fetch();
        void Store(T item);
        int Count { get; }
    }
}
The concept here is simple - we'll let the public Pool class handle the common issues like thread-safety, but use a different "item store" for each access pattern. LIFO is easily represented by a stack, FIFO is a queue, and I've used a not-very-optimized-but-probably-adequate circular buffer implementation using a List<T> and index pointer to approximate a round-robin access pattern.
All of the classes below are inner classes of the Pool<T> - this was a style choice, but since these really aren't meant to be used outside the Pool, it makes the most sense.
class QueueStore : Queue<T>, IItemStore
{
    public QueueStore(int capacity) : base(capacity)
    {
    }

    public T Fetch()
    {
        return Dequeue();
    }

    public void Store(T item)
    {
        Enqueue(item);
    }
}

class StackStore : Stack<T>, IItemStore
{
    public StackStore(int capacity) : base(capacity)
    {
    }

    public T Fetch()
    {
        return Pop();
    }

    public void Store(T item)
    {
        Push(item);
    }
}
These are the obvious ones - stack and queue. I don't think they really warrant much explanation. The circular buffer is a little more complicated:
class CircularStore : IItemStore
{
    private List<Slot> slots;
    private int freeSlotCount;
    private int position = -1;

    public CircularStore(int capacity)
    {
        slots = new List<Slot>(capacity);
    }

    public T Fetch()
    {
        if (Count == 0)
            throw new InvalidOperationException("The buffer is empty.");

        int startPosition = position;
        do
        {
            Advance();
            Slot slot = slots[position];
            if (!slot.IsInUse)
            {
                slot.IsInUse = true;
                --freeSlotCount;
                return slot.Item;
            }
        } while (startPosition != position);

        throw new InvalidOperationException("No free slots.");
    }

    public void Store(T item)
    {
        Slot slot = slots.Find(s => object.Equals(s.Item, item));
        if (slot == null)
        {
            slot = new Slot(item);
            slots.Add(slot);
        }
        slot.IsInUse = false;
        ++freeSlotCount;
    }

    public int Count
    {
        get { return freeSlotCount; }
    }

    private void Advance()
    {
        position = (position + 1) % slots.Count;
    }

    class Slot
    {
        public Slot(T item)
        {
            this.Item = item;
        }

        public T Item { get; private set; }
        public bool IsInUse { get; set; }
    }
}
I could have picked a number of different approaches, but the bottom line is that resources should be accessed in the same order that they were created, which means that we have to maintain references to them but mark them as "in use" (or not). In the worst-case scenario, only one slot is ever available, and it takes a full iteration of the buffer for every fetch. This is bad if you have hundreds of resources pooled and are acquiring and releasing them several times per second; not really an issue for a pool of 5-10 items, and in the typical case, where resources are lightly used, it only has to advance one or two slots.
Remember, these classes are private inner classes - that is why they don't need a whole lot of error-checking, the pool itself restricts access to them.
Throw in an enumeration and a factory method and we're done with this part:
// Outside the pool
public enum AccessMode { FIFO, LIFO, Circular };

// Inside the Pool
private IItemStore itemStore;

private IItemStore CreateItemStore(AccessMode mode, int capacity)
{
    switch (mode)
    {
        case AccessMode.FIFO:
            return new QueueStore(capacity);
        case AccessMode.LIFO:
            return new StackStore(capacity);
        default:
            Debug.Assert(mode == AccessMode.Circular,
                "Invalid AccessMode in CreateItemStore");
            return new CircularStore(capacity);
    }
}
The next problem to solve is loading strategy. I've defined three types:
public enum LoadingMode { Eager, Lazy, LazyExpanding };
The first two should be self-explanatory; the third is sort of a hybrid, it lazy-loads resources but doesn't actually start re-using any resources until the pool is full. This would be a good trade-off if you want the pool to be full (which it sounds like you do) but want to defer the expense of actually creating them until first access (i.e. to improve startup times).
The loading methods really aren't too complicated, now that we have the item-store abstraction:
private int size;
private int count;

private T AcquireEager()
{
    lock (itemStore)
    {
        return itemStore.Fetch();
    }
}

private T AcquireLazy()
{
    lock (itemStore)
    {
        if (itemStore.Count > 0)
        {
            return itemStore.Fetch();
        }
    }
    Interlocked.Increment(ref count);
    return factory(this);
}

private T AcquireLazyExpanding()
{
    bool shouldExpand = false;
    if (count < size)
    {
        int newCount = Interlocked.Increment(ref count);
        if (newCount <= size)
        {
            shouldExpand = true;
        }
        else
        {
            // Another thread took the last spot - use the store instead
            Interlocked.Decrement(ref count);
        }
    }
    if (shouldExpand)
    {
        return factory(this);
    }
    else
    {
        lock (itemStore)
        {
            return itemStore.Fetch();
        }
    }
}

private void PreloadItems()
{
    for (int i = 0; i < size; i++)
    {
        T item = factory(this);
        itemStore.Store(item);
    }
    count = size;
}
The size and count fields above refer to the maximum size of the pool and the total number of resources owned by the pool (but not necessarily available), respectively. AcquireEager is the simplest, it assumes that an item is already in the store - these items would be preloaded at construction, i.e. in the PreloadItems method shown last.
AcquireLazy checks to see if there are free items in the pool, and if not, it creates a new one. AcquireLazyExpanding will create a new resource as long as the pool hasn't reached its target size yet. I've tried to optimize this to minimize locking, and I hope I haven't made any mistakes (I have tested this under multi-threaded conditions, but obviously not exhaustively).
You might be wondering why none of these methods bother checking to see whether or not the store has reached the maximum size. I'll get to that in a moment.
Now for the pool itself. Here is the full set of private data, some of which has already been shown:
private bool isDisposed;
private Func<Pool<T>, T> factory;
private LoadingMode loadingMode;
private IItemStore itemStore;
private int size;
private int count;
private Semaphore sync;
Answering the question I glossed over in the last paragraph - how to ensure we limit the total number of resources created - it turns out that .NET already has a perfectly good tool for that: it's called Semaphore, and it's designed specifically to allow a fixed number of threads access to a resource (in this case the "resource" is the inner item store). Since we're not implementing a full-on producer/consumer queue, this is perfectly adequate for our needs.
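As a standalone illustration of that gating behaviour (separate from the pool code; the variable names are mine), a Semaphore initialized with a count of N lets at most N callers through WaitOne at a time:

```csharp
using System;
using System.Threading;

class SemaphoreDemo
{
    static void Main()
    {
        // At most 2 concurrent holders, out of a maximum of 2.
        var sync = new Semaphore(2, 2);

        sync.WaitOne(); // 1st acquisition succeeds immediately
        sync.WaitOne(); // 2nd acquisition succeeds immediately

        // A plain WaitOne() here would block until someone calls Release().
        // With a zero timeout it returns false instead of blocking.
        bool acquired = sync.WaitOne(0);
        Console.WriteLine(acquired); // False

        sync.Release(2); // give both slots back
    }
}
```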
The constructor looks like this:
public Pool(int size, Func<Pool<T>, T> factory,
    LoadingMode loadingMode, AccessMode accessMode)
{
    if (size <= 0)
        throw new ArgumentOutOfRangeException("size", size,
            "Argument 'size' must be greater than zero.");
    if (factory == null)
        throw new ArgumentNullException("factory");

    this.size = size;
    this.factory = factory;
    sync = new Semaphore(size, size);
    this.loadingMode = loadingMode;
    this.itemStore = CreateItemStore(accessMode, size);
    if (loadingMode == LoadingMode.Eager)
    {
        PreloadItems();
    }
}
Should be no surprises here. Only thing to note is the special-casing for eager loading, using the PreloadItems method already shown earlier.
Since almost everything's been cleanly abstracted away by now, the actual Acquire and Release methods are really very straightforward:
public T Acquire()
{
    sync.WaitOne();
    switch (loadingMode)
    {
        case LoadingMode.Eager:
            return AcquireEager();
        case LoadingMode.Lazy:
            return AcquireLazy();
        default:
            Debug.Assert(loadingMode == LoadingMode.LazyExpanding,
                "Unknown LoadingMode encountered in Acquire method.");
            return AcquireLazyExpanding();
    }
}

public void Release(T item)
{
    lock (itemStore)
    {
        itemStore.Store(item);
    }
    sync.Release();
}
As explained earlier, we're using the Semaphore to control concurrency instead of religiously checking the status of the item store. As long as acquired items are correctly released, there's nothing to worry about.
Last but not least, there's cleanup:
public void Dispose()
{
    if (isDisposed)
    {
        return;
    }
    isDisposed = true;
    if (typeof(IDisposable).IsAssignableFrom(typeof(T)))
    {
        lock (itemStore)
        {
            while (itemStore.Count > 0)
            {
                IDisposable disposable = (IDisposable)itemStore.Fetch();
                disposable.Dispose();
            }
        }
    }
    sync.Close();
}

public bool IsDisposed
{
    get { return isDisposed; }
}
The purpose of that IsDisposed property will become clear in a moment. All the main Dispose method really does is dispose the actual pooled items if they implement IDisposable.
Now you can basically use this as-is, with a try-finally block, but I'm not fond of that syntax, because if you start passing around pooled resources between classes and methods then it's going to get very confusing. It's possible that the main class that uses a resource doesn't even have a reference to the pool. It really becomes quite messy, so a better approach is to create a "smart" pooled object.
Let's say we start with the following simple interface/class:
public interface IFoo : IDisposable
{
    void Test();
}

public class Foo : IFoo
{
    private static int count = 0;
    private int num;

    public Foo()
    {
        num = Interlocked.Increment(ref count);
    }

    public void Dispose()
    {
        Console.WriteLine("Goodbye from Foo #{0}", num);
    }

    public void Test()
    {
        Console.WriteLine("Hello from Foo #{0}", num);
    }
}
Here's our pretend disposable Foo resource which implements IFoo and has some boilerplate code for generating unique identities. What we do is to create another special, pooled object:
public class PooledFoo : IFoo
{
    private Foo internalFoo;
    private Pool<IFoo> pool;

    public PooledFoo(Pool<IFoo> pool)
    {
        if (pool == null)
            throw new ArgumentNullException("pool");

        this.pool = pool;
        this.internalFoo = new Foo();
    }

    public void Dispose()
    {
        if (pool.IsDisposed)
        {
            internalFoo.Dispose();
        }
        else
        {
            pool.Release(this);
        }
    }

    public void Test()
    {
        internalFoo.Test();
    }
}
This just proxies all of the "real" methods to its inner IFoo (we could do this with a Dynamic Proxy library like Castle, but I won't get into that). It also maintains a reference to the Pool that creates it, so that when we Dispose this object, it automatically releases itself back to the pool. Except when the pool has already been disposed - this means we are in "cleanup" mode and in this case it actually cleans up the internal resource instead.
Using the approach above, we get to write code like this:
// Create the pool early
Pool<IFoo> pool = new Pool<IFoo>(PoolSize, p => new PooledFoo(p),
    LoadingMode.Lazy, AccessMode.Circular);

// Sometime later on...
using (IFoo foo = pool.Acquire())
{
    foo.Test();
}
This is a very good thing to be able to do. It means that the code which uses the IFoo (as opposed to the code which creates it) does not actually need to be aware of the pool. You can even inject IFoo objects using your favourite DI library and the Pool<T> as the provider/factory.
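For example, without committing to any particular DI library, the pool can be handed around as a plain Func<IFoo> factory (a sketch building on the Pool<IFoo> and PooledFoo types above; the Worker class is a hypothetical consumer):

```csharp
using System;

// A consumer that only knows how to obtain an IFoo, not where it comes from.
public class Worker
{
    private readonly Func<IFoo> fooFactory;

    public Worker(Func<IFoo> fooFactory)
    {
        this.fooFactory = fooFactory;
    }

    public void DoWork()
    {
        using (IFoo foo = fooFactory())
        {
            foo.Test();
        } // Dispose() releases the instance back to the pool.
    }
}

// Composition root - the only place that knows about the pool:
// var pool = new Pool<IFoo>(5, p => new PooledFoo(p),
//     LoadingMode.Lazy, AccessMode.Circular);
// var worker = new Worker(() => pool.Acquire());
// worker.DoWork();
```

A DI container would do essentially the same wiring: register `() => pool.Acquire()` as the provider for IFoo.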
I've put the complete code on PasteBin for your copy-and-pasting enjoyment. There's also a short test program you can use to play around with different loading/access modes and multithreaded conditions, to satisfy yourself that it's thread-safe and not buggy.
Let me know if you have any questions or concerns about any of this.
Another Answer
Something like this might suit your needs.
/// <summary>
/// Represents a pool of objects with a size limit.
/// </summary>
/// <typeparam name="T">The type of object in the pool.</typeparam>
public sealed class ObjectPool<T> : IDisposable
    where T : new()
{
    private readonly int size;
    private readonly object locker;
    private readonly Queue<T> queue;
    private int count;

    /// <summary>
    /// Initializes a new instance of the ObjectPool class.
    /// </summary>
    /// <param name="size">The size of the object pool.</param>
    public ObjectPool(int size)
    {
        if (size <= 0)
        {
            const string message = "The size of the pool must be greater than zero.";
            throw new ArgumentOutOfRangeException("size", size, message);
        }
        this.size = size;
        locker = new object();
        queue = new Queue<T>();
    }

    /// <summary>
    /// Retrieves an item from the pool.
    /// </summary>
    /// <returns>The item retrieved from the pool.</returns>
    public T Get()
    {
        lock (locker)
        {
            if (queue.Count > 0)
            {
                return queue.Dequeue();
            }
            count++;
            return new T();
        }
    }

    /// <summary>
    /// Places an item in the pool.
    /// </summary>
    /// <param name="item">The item to place to the pool.</param>
    public void Put(T item)
    {
        lock (locker)
        {
            if (count < size)
            {
                queue.Enqueue(item);
            }
            else
            {
                using (item as IDisposable)
                {
                    count--;
                }
            }
        }
    }

    /// <summary>
    /// Disposes of items in the pool that implement IDisposable.
    /// </summary>
    public void Dispose()
    {
        lock (locker)
        {
            count = 0;
            while (queue.Count > 0)
            {
                using (queue.Dequeue() as IDisposable)
                {
                }
            }
        }
    }
}
Example Usage
public class ThisObject
{
    private readonly ObjectPool<That> pool = new ObjectPool<That>(100);

    public void ThisMethod()
    {
        var that = pool.Get();
        try
        {
            // Use that ....
        }
        finally
        {
            pool.Put(that);
        }
    }
}