Should I use RavenDB?
The list of possible tools for your case really depends on the numbers: how many records per second you want to insert, what the database size will be, what queries must be supported and how many of them per second, etc. You said that you'll be doing updates and primary-key lookups only; in that case a simple key-value store might be for you, but using a raw key-value store is difficult because only very basic functionality is available out of the box.
Document databases are really nice to work with if you have complex document structures that are hard to map into an RDBMS, but I don't think your documents are very complex. Generally speaking, you have a very broad choice of tools and almost all of them will work OK, so I'd recommend using something stable, proven, easy to use and known to you.
Oren Eini (Ayende Rahien):
You can do multiple operations in a single save using the low-level commands; see DatabaseCommands. Whenever evaluating a technology, the most important aspect to consider is whether it fits your needs.
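A multi-operation save through the low-level commands might look like this sketch, assuming the 2.x-era RavenDB client; the server URL, document keys, and document contents are illustrative assumptions:

```csharp
using Raven.Abstractions.Commands;
using Raven.Client.Document;
using Raven.Json.Linq;

var store = new DocumentStore { Url = "http://localhost:8080" }.Initialize();

// Several commands sent to the server as one batch,
// applied in a single transaction.
store.DatabaseCommands.Batch(new ICommandData[]
{
    new PutCommandData
    {
        Key = "ships/1",
        Document = RavenJObject.FromObject(new { Name = "Aurora", MaxSpeed = 32 }),
        Metadata = new RavenJObject()
    },
    new DeleteCommandData { Key = "ships/2" }
});
```

Because the whole array goes over the wire in one request, the batch either applies completely or not at all.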
Considering the low number of total ships, I think that this might be more than viable. And then you can take advantage of the other features of RavenDB that you mentioned, like the nice API and easy replication.

Well, unfortunately the range of possible solutions is limited. What I need is something able to handle about – writes per second with small amounts of data.
This alone is not the big problem. Many products offer this kind of performance on reasonable hardware, but additionally I need synchronous replication to have the data available on a second node if there is a hardware error. The switchover needs to happen in a matter of seconds, and data loss is not an option.
That's the reason I need transactions, besides being able to do multiple updates as part of a single operation. In a relational model, SQL Server with synchronous mirroring would fit these requirements pretty well. I have also looked at AppFabric and memcached. However, the write performance I'm currently measuring is not what I expected. This puzzles me a little bit, because usually the capabilities of the disk IO system are the limiting factor. That is not the case here: the Windows performance monitor clearly shows the CPU going crazy while the disk is not the bottleneck.
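The "multiple updates as part of a single operation" requirement maps naturally onto RavenDB's unit-of-work session: everything tracked by one session is written in one transaction when SaveChanges is called. A sketch, assuming the RavenDB client library; the Ship class, server URL, and document ids are illustrative assumptions:

```csharp
using Raven.Client.Document;

var store = new DocumentStore { Url = "http://localhost:8080" }.Initialize();
using (var session = store.OpenSession())
{
    var a = session.Load<Ship>("ships/1");
    var b = session.Load<Ship>("ships/2");
    a.Lat = 54.3; a.Lng = 18.6;
    b.Lat = 55.1; b.Lng = 17.9;
    session.SaveChanges(); // both updates committed as a single transaction
}

public class Ship
{
    public string Id { get; set; }
    public double Lat { get; set; }
    public double Lng { get; set; }
}
```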
This is probably the trade-off RavenDB makes: invest more CPU cycles during put operations in order to be very fast later on, once clients start looking for data. Anyway, thanks very much for your input. I'm sure I'll find something that suits my needs.

Domo,
We routinely see thousands of writes per second on commodity hardware, so your numbers seem strange.
It happens immediately, but it happens in a background thread.

Matt Warren:

    session.Store(company);
    session.SaveChanges();
    session.Advanced.DatabaseCommands.Delete(company.Id, null);

So I guess there's a lot of overhead involved.

Chris Marisic:
This is a standard transaction log, and it is one of the most dead-set examples of SQL Server's sweet spot. If I was going to create a credit card processing company, I wouldn't use Raven for my inbound transactions, not because I have any qualms with its performance or reliability.

This index is then permanent, i.e. the index needs to be built before my ad-hoc query can execute, so there's a delay. And you say even if I do this hundreds of times, and keep doing it, I don't have a problem? That would be impressive. Or did I just get the process wrong?

Stefan,
Ad-hoc queries will try to find an existing index; if there is no index that matches the query, one will be created.
There might be a delay while the query is being executed, yes. Dynamic indexes created because of ad hoc queries will hang around for a while, ready to serve the next matching query.
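Such an ad-hoc query names no index at all; RavenDB matches it to an existing dynamic index or builds one on the fly. A sketch, assuming the RavenDB LINQ client; the Ship class and server URL are illustrative assumptions:

```csharp
using System.Linq;
using Raven.Client.Document;

var store = new DocumentStore { Url = "http://localhost:8080" }.Initialize();
using (var session = store.OpenSession())
{
    // No index specified: RavenDB picks or creates a dynamic index
    // that covers the MaxSpeed predicate.
    var fastShips = session.Query<Ship>()
                           .Where(s => s.MaxSpeed > 30)
                           .ToList();
}

public class Ship
{
    public string Id { get; set; }
    public double MaxSpeed { get; set; }
}
```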
If there isn't enough activity, they will go away. If there is a lot of activity, they will become permanent.

Do you have any real-world customer stories showing how RavenDB handles large amounts of data and transactions? What's the failover story? With all these paradigm shifts, it's hard to convince anybody that what looks good in theory or works well in the lab is actually ready for critical enterprise apps. The latest MongoDB data-loss rumors didn't help.
Stefan,
Yes, we have several big clients working already.

What I'm interested in is not so much the developer story; that one looks pretty solid. I'd like to know how it compares to, say, SQL Server from an operations perspective, because one good thing about shipping an app based on SQL Server is that it's implicitly trusted by the customer, and they already have experience and operational procedures for it in place.
There are features like log shipping and database mirroring, transparent data encryption, you name it.

Stefan,
MongoDB is intentionally designed to be really fast at the expense of being reliable. Personally, I think that this is an insane decision for most scenarios, but that is what it is designed for.
We are actually using the same backend as Exchange and Active Directory. We have support for each of the items that you have mentioned, btw.

I'm talking about perception too, not just facts. Not only the programmer needs to be convinced, but the customer too. That you're using ESE helps a lot; I didn't know that. Where can I find documentation of these things? The documentation section on the RavenDB site doesn't seem to have much administration information either, just a few bits here and there within the developer docs. At least Google can't find any, and the search box at the top of the page does not seem to work. Convincing developers is one thing. I would also consider posting a list of SQL Server and Oracle features and describing how the same effect is reached using RavenDB, or why it is not required.
I'm concerned that the latency between a user saving changes to a record and that record being ready to be queried could be a problem in some scenarios.

Rafal,
The problem with an RDBMS is that even if it is designated OLTP, it is very tempting to do just "some" reports on it, because it allows that, and then you die when you grow big. And I am pretty sure that we are going to be using less memory and CPU for standard operations, if only because of the different modeling requirements.
Ayende,
Sorry if this is an RTFM question, but if you issue an ad-hoc query, does Raven attempt to create an index based on it?

Sorry, another RTFM question. Thanks, Stefan. Related topic: can RavenDB create indexes that include data from other partitions?

Indexes are reused, yes.

Stefan,
We would love your comments on the new docs at beta.