Thread: ElevateDB 2021 Wishlist!
Sat, Jan 23 2021 11:53 AM

Tim Young [Elevate Software]

Elevate Software, Inc.


Email timyoung@elevatesoft.com

Ian,

<< IB - Interesting.  I am not sure what exactly this will mean until I see it in the flesh. >>

It means that you won't *have* to use SQL, etc. if you just want a simple storage solution for data that still has fail-safe, atomic, transactional access.

<< IB - AES 256bit? >>

Yes - this is already in EWB 3:

https://www.elevatesoft.com/manual?action=viewtype&id=ewb3&type=TEncryptionType

<< IB - Any improvement in the compression algorithm for Backups? >>

I'm not sure if I'll have time to implement that, but the backups will be able to be executed concurrently with other transactions without issue, due to the improved transaction model.

Tim Young
Elevate Software
www.elevatesoft.com
Sat, Jan 23 2021 12:51 PM

Tim Young [Elevate Software]

Elevate Software, Inc.


Email timyoung@elevatesoft.com

Michael,

<< I guess the list does not stop at those four, right? :-) >>

Those are the main improvements that I'm shooting for, and the main areas that will form the foundation going forward.

<< If you manage to have a single-file database with log-structured merge-tree indexes (aka RocksDB) and multi-OS support, it will be the first DB with those features and will easily win over a large number of SQLite developers...
(since most PCs nowadays use SSDs and mobiles have flash memory) >>

I'm not entirely sure that LSM trees are the correct solution to random write performance issues.  They definitely exhibit a "not invented here" aspect that fails to recognize that segments are almost exactly like large leaf index pages (sorted arrays of keys), and the sparse indexes for segments are just like higher-level internal nodes in a B-Tree index.  Plus, the additional layers like the sparse segment index added to the base architecture are there to effectively patch performance holes (specifically, read performance) in the original architecture.  In general, I tend to prefer read performance over write performance, and tend to go with architectures/algorithms that favor the former.
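Tim's analogy can be made concrete with a toy sketch (purely illustrative Python, not ElevateDB or RocksDB code; all names here are hypothetical): a segment is just a sorted run of keys, and its sparse index is a smaller sorted array that narrows a lookup to one block, which is exactly the pair of roles a B-Tree assigns to leaf and internal pages.

```python
import bisect

# Hypothetical sketch: an LSM segment is a sorted run of keys (like a
# large B-Tree leaf page), and its sparse index is a smaller sorted
# array of every Nth key (like a higher-level internal node).

SEGMENT = list(range(0, 10_000, 3))   # sorted run of keys ("leaf page")
SPARSE_EVERY = 128
SPARSE = SEGMENT[::SPARSE_EVERY]      # sparse index ("internal node")

def lookup(key):
    """Narrow the search via the sparse index, then probe one block."""
    block = max(bisect.bisect_right(SPARSE, key) - 1, 0)
    lo = block * SPARSE_EVERY
    hi = min(lo + SPARSE_EVERY, len(SEGMENT))
    i = bisect.bisect_left(SEGMENT, key, lo, hi)
    return i < hi and SEGMENT[i] == key
```

The lookup path is structurally identical to a two-level B-Tree descent, which is the point of Tim's "not invented here" remark.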

There are other solutions that don't involve artificially mixing higher-level concepts like indexes with the issues of I/O organization.  I'm leaning more towards database page maps that allow the lower-level storage engine code to re-org the on-disk storage as-needed, even while the database is in use.  This can be done with simple I/O hints that still keep the two layers separate, and achieves the same goal of reducing random I/O while not harming either read or write performance.  Write performance is already mitigated due to the fact that the write-ahead log is sequential and is the only data necessary for actual recovery.
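The write-ahead log point can be sketched the same way (a hypothetical Python toy, not ElevateDB internals): every change is appended sequentially to one log, and the log alone is enough to rebuild state after a crash, which is what frees the engine to reorganize the random-access storage in the background.

```python
import json

# Hypothetical sketch: the sequential write-ahead log is the only data
# needed for recovery, so the random-access page store can be
# reorganized at leisure without endangering durability.

class WalStore:
    def __init__(self):
        self.log = []    # stands in for a sequential, append-only file
        self.data = {}   # stands in for the random-access page store

    def put(self, key, value):
        # The append to the log happens first; it is strictly sequential I/O.
        self.log.append(json.dumps({"op": "put", "k": key, "v": value}))
        self.data[key] = value

    @classmethod
    def recover(cls, log):
        """Rebuild state purely from the sequential log."""
        store = cls()
        for line in log:
            rec = json.loads(line)
            if rec["op"] == "put":
                store.data[rec["k"]] = rec["v"]
        store.log = list(log)
        return store
```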

Tim Young
Elevate Software
www.elevatesoft.com
Sat, Jan 23 2021 6:29 PM

Charalampos Michael

Tim Young [Elevate Software] wrote:

Michael,

<<
I'm not entirely sure that LSM trees are the correct solution to random write performance issues.  They definitely exhibit a "not invented here" aspect that fails to recognize that segments are almost exactly like large leaf index pages ... There are other solutions that don't involve artificially mixing higher-level concepts like indexes with the issues of I/O organization.  I'm leaning more towards database page maps that allow the lower-level storage engine code to re-org the on-disk storage as-needed, even while the database is in use.  This can be done with simple I/O hints that still keep the two layers separate, and achieves the same goal of reducing random I/O while not harming either read or write performance.  Write performance is already mitigated due to the fact that the write-ahead log is sequential and is the only data necessary for actual recovery.>>

Well, you're the boss! You know what's best for us!

PS: Thanks for all this technical info! :D
Sun, Jan 24 2021 4:03 AM

Roy Lambert

NLH Associates

Team Elevate

Tim


Looks good, and I fully agree with your decision to limit to XE2 and upwards, but since I'm parked on D2007 it means I probably won't get to play with it.

Out of interest how much does limiting to XE2 and above diminish the IFDEF plague?

Roy Lambert
Mon, Feb 1 2021 3:26 PM

Tim Young [Elevate Software]

Elevate Software, Inc.


Email timyoung@elevatesoft.com

Roy,

<< Looks good, and I fully agree with your decision to limit to XE2 and upwards but since I'm parked on D2007 it means I probably won't get to play with it. >>

The engine will still work back to D5.  The issue is the source code to the engine and how the engine is used with the older versions (DLL vs DCU units that can be compiled into a binary).

<< Out of interest how much does limiting to XE2 and above diminish the IFDEF plague? >>

A lot.  There are certain versions where major changes were made to Delphi's RTL/DB units that affect how core functionality needs to be coded (2009 was one of those, for example).  Anything in XE2 and above is fairly compatible from version to version, and contains most of the modern improvements to the Delphi RTL.

The main improvement is just eliminating the engine source code distribution - that gives me the ability to build the core engine with just a handful of builds, instead of the current 124 combinations of platform, Delphi/Lazarus version, etc. that need to be targeted.

Tim Young
Elevate Software
www.elevatesoft.com
Tue, Feb 2 2021 3:29 AM

Roy Lambert

NLH Associates

Team Elevate Team Elevate

Tim


>The engine will still work back to D5. The issue is the source code to the engine and how the engine is used with the older versions (DLL vs DCU units that can be compiled into a binary).

What does that mean in relation to user-defined functions, e.g. the WordGenerator & TextFilter?

><< Out of interest how much does limiting to XE2 and above diminish the IFDEF plague? >>
>
>A lot. There are certain versions where major changes were made to Delphi's RTL/DB units that affect how core functionality needs to be coded (2009 was one of those, for example). Anything in XE2 and above is fairly compatible from version to version, and contains most of the modern improvements to the Delphi RTL.

Having had fun (for some definitions of the word) sorting out code intended for multiple versions, wading through the IFDEFs, and seeing how the IDE can take you to the wrong place, you have my sympathy there.

Roy
Thu, Feb 4 2021 8:19 AM

Teco

TECHNOLOG Systems GmbH

What will happen with the Lazarus version? We use version 2 for shared users.
Will the server be available for the Lazarus version?



Tim Young [Elevate Software] wrote:

.....
1) It will be single-file only and file-sharing access will finally have to go in favor of two modes: local, single-user or database server, multi-user.  This is necessary to alleviate the Windows SMB issues with multi-user file-sharing  and to permit more advanced locking/transaction architectures.
Mon, Feb 8 2021 1:20 AM

Yusuf Zorlu

MicrotronX - Speditionssoftware vom Profi

Tim Young [Elevate Software] wrote:

> Here are some general points about EDB 3:
>
> 1) It will be single-file only and file-sharing access will finally
> have to go in favor of two modes: local, single-user or database
> server, multi-user.  This is necessary to alleviate the Windows SMB
> issues with multi-user file-sharing  and to permit more advanced
> locking/transaction architectures.

Hi Tim, will this "single file" with multi-user database access perform at
least as well as EDB 2, or better?

> 2) The engine itself is probably going to be closed-source or XE2 and
> higher only, with source code provided/available for the various
> clients (VCL, .NET, ODBC, PHP, etc).  It's starting to become way too
> cumbersome to support D5 -> 10.4 with the same set of source code,
> and I need to make it easier/quicker to get EDB builds out.  In
> addition, the EDB Manager will also be going closed-source or XE2 or
> higher only.

We don't need the source if we can compile the server engine from
DCUs; will this be possible?

> 3) The engine is going to be split up into various pieces, so you
> will be able to, for example, just use the engine as a direct, local
> storage engine (without the SQL, user security, etc.) if you want.

Will that mean that we will be able to use the client in single-file
mode, e.g. on Android?


--
Yusuf Zorlu | MicrotronX
Tue, Feb 9 2021 2:10 AM

Charalampos Michael

And something very awesome that NexusDB added that I would like to see in ElevateDB:
multithreaded reindexing operations, which spread work across multiple CPU cores to finish faster.

The words "spreads work across multiple CPU cores" (aka PPL) are a must in 2021!
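For a rough picture of what "spread reindexing across cores" can mean, here is a hedged sketch of a parallel sort-merge index build (illustrative Python, not NexusDB's or ElevateDB's actual implementation; note that CPython's GIL means a real engine would use native threads or processes for the sort phase):

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of a parallel index rebuild: each worker sorts
# one chunk of keys independently ("runs"), then a single sequential
# merge pass produces the final ordered index.

def build_index(keys, workers=4):
    chunk = max(1, len(keys) // workers)
    chunks = [keys[i:i + chunk] for i in range(0, len(keys), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        runs = list(pool.map(sorted, chunks))   # sort runs in parallel
    return list(heapq.merge(*runs))             # sequential merge pass
```

The sort phase is where the cores earn their keep; the merge is a single streaming pass, which is why the scheme maps well onto multiple CPUs.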
Tue, Feb 9 2021 8:56 AM

Matthew Jones

Charalampos Michael wrote:

> The words "spreads work across multiple CPU cores" (aka PPL) is a must in 2021!

Yes, and no. 8-)

It is important to know what the task being done actually is: is it processor-bound, or I/O-bound?

I have a process that generates a few hundred zip files containing many large files. At the end, I then also need to get a hash of each zip file. Using C# and the Parallel.For facility, the cores on my Threadripper machine are all very busy when creating the zips. But when I then realised that the hash was taking 7 minutes, I figured parallelising that too would save time. So lots more threads, and I saved a few seconds. The time taken was all reading the disk, and asking 64 threads to read 64 files from the same disk doesn't help anything.
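The pattern Matthew describes looks like this in, say, Python (a hypothetical sketch, not his C# code): the hashing parallelizes cleanly, but whether the worker pool actually helps depends entirely on whether the CPU or the disk is the bottleneck.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: hash many blobs with a thread pool. The code
# parallelizes trivially, but if every worker is waiting on the same
# disk, extra threads add nothing -- the bottleneck sets the ceiling.

def hash_all(blobs, workers=8):
    def sha256(blob):
        return hashlib.sha256(blob).hexdigest()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(sha256, blobs))
```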

I don't know what's involved in re-creating an index, and it may be that a few threads working together to read/process/write would be an improvement. But just slapping more threads in may not improve things much, and can actually slow them down, quite apart from introducing errors.


--

Matthew Jones