What is the best way to sync 2 sqlite tables over http and json?




Problem Description

I have a fairly simple sync problem. I have a table with about 10 columns that I want to keep in sync between a SQLite file on 3 different clients: an iPhone client, a browser client, and a Ruby on Rails client. So I need a simple syncing solution that will work for all 3, i.e. I can easily implement it in JavaScript, Objective-C, and Ruby, and it works with JSON over HTTP. I have looked at various components of other syncing solutions, like the one in git, some of the tutorials that have come out of the Google Gears community, and a Rails plugin called acts_as_replica. My naive approach would be to simply store a last-synced timestamp in the database and then keep a changelog of all deletes as they are made. (I don't allow updates to entries in the table.) I can then retrieve all the new entries since the last timestamp, combine them with the deletes, and send the changelog as JSON over HTTP between the 3 clients.

Should I consider using a SHA1 hash or a UUID for each entry, or is a last-synced timestamp sufficient? How do I make sure there are no duplicate entries? Is there a simpler algorithm I could follow?
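The timestamp-plus-changelog approach described in the question could be sketched as follows. This is a minimal illustration, not code from the question: the field names and `build_changelog` helper are hypothetical, and each row carries a client-generated UUID so the same logical entry is never inserted twice when changelogs cross between clients.

```ruby
require 'json'
require 'securerandom'

# Hypothetical sketch: build the changelog to ship as JSON over HTTP.
# Timestamps are ISO 8601 UTC strings, so string comparison orders them.
def build_changelog(rows, deleted_ids, last_synced_at)
  {
    'since'   => last_synced_at,
    'inserts' => rows.select { |r| r['created_at'] > last_synced_at },
    'deletes' => deleted_ids  # UUIDs of rows removed since the last sync
  }
end

rows = [
  { 'id' => SecureRandom.uuid, 'body' => 'new entry', 'created_at' => '2024-01-02T00:00:00Z' },
  { 'id' => SecureRandom.uuid, 'body' => 'old entry', 'created_at' => '2023-12-01T00:00:00Z' }
]
changelog = build_changelog(rows, [], '2024-01-01T00:00:00Z')
puts JSON.generate(changelog)  # only the entry created after the last sync appears
```

On the receiving side, a client would apply the deletes first and then insert any rows whose UUID it does not already have, which is what makes the UUID (rather than an autoincrement ID) useful for deduplication.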

Recommended Answer

I am assuming changes are likely to be at the end. I don't know the nature of the inserts and updates, but here is my idea:

  • I would SHA1 (or MD5, it doesn't matter in this case) the days of the current month and each month before it. Comparing these fingerprints is a fast way to see where the differences are. (I leave today unhashed.)
  • If previous months have differences:
    • If the volume for a month is too big, we can split the month and generate daily fingerprints on the fly instead of comparing the whole month at once.
    • Otherwise we can treat a monthly change the same way we treat a daily change.
  • After finding out where the changes occurred, the master copy sends a list of all unique IDs for that period. (Today's info is always sent.)
  • The slave then deletes what has to be deleted and compiles a list of IDs to be inserted.
  • The master sends only those records (in full).
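The fingerprint comparison in the first steps could look like the sketch below (all names are hypothetical): each period's fingerprint is the SHA1 of its sorted row IDs, so two copies agree on a month exactly when they hold the same IDs for it.

```ruby
require 'digest'
require 'date'

# SHA1 over the sorted IDs of a period; order-independent by construction.
def fingerprint(ids)
  Digest::SHA1.hexdigest(ids.sort.join(','))
end

# rows: hash of { uuid => Date created }; returns months whose
# fingerprints disagree between the two copies.
def changed_months(master, slave)
  by_month = ->(rows) { rows.group_by { |_id, date| date.strftime('%Y-%m') } }
  m = by_month.call(master)
  s = by_month.call(slave)
  (m.keys | s.keys).reject do |month|
    fingerprint((m[month] || []).map(&:first)) ==
      fingerprint((s[month] || []).map(&:first))
  end
end

master = { 'a' => Date.new(2024, 1, 5), 'b' => Date.new(2024, 2, 1) }
slave  = { 'a' => Date.new(2024, 1, 5) }
puts changed_months(master, slave).inspect  # => ["2024-02"]
```

The same `fingerprint` helper works unchanged at daily granularity by grouping on `'%Y-%m-%d'` instead, which is what makes splitting a large month cheap.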

The time categories (day, month) can be adjusted according to the data volume.
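The last two steps of the answer (slave deletes, master resends) reduce to two set differences once the master has sent its full ID list for a changed period. A sketch, with hypothetical names:

```ruby
# Given the master's full ID list for a period known to differ, the
# slave derives what to remove locally and what to request in full.
def reconcile(master_ids, slave_ids)
  {
    delete: slave_ids - master_ids,  # rows the slave must remove
    fetch:  master_ids - slave_ids   # rows the master must send in full
  }
end

plan = reconcile(%w[a b c], %w[a x])
puts plan[:delete].inspect  # => ["x"]
puts plan[:fetch].inspect   # => ["b", "c"]
```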

Of course this is a naive and simple algorithm. If I were processing sensitive/critical data, I would look for a transactional algorithm.