siddontang

Member Since 10 years ago

PingCAP, China

2.7k followers
0 following
423 stars
145 repos

42 contributions in the last year

Pinned
⚡ a MySQL proxy powered by Go
⚡ my golang lib
⚡ a fast distributed message queue implemented with go
⚡ libtnet is a tiny high-performance C++ network lib, like Tornado
⚡ Elasticsearch note
Activity
Dec
25
1 month ago
created tag
created 1 month ago
published release v0.1

siddontang published release v0.1 in siddontang/xkcdsay

created 1 month ago
push

siddontang pushed to siddontang/xkcdsay

siddontang

add xkcd license

Signed-off-by: siddontang [email protected]

commit sha: 143ef158b6c54619ba3aea7445f311116440336c

pushed 1 month ago
delete

siddontang deleted branch add-license-1 in siddontang/xkcdsay

deleted 1 month ago
created branch

siddontang created branch add-license-1 in siddontang/xkcdsay

created 1 month ago
Dec
24
1 month ago
push

siddontang pushed to siddontang/xkcdsay

siddontang

add lambda support to sync xkcd

Signed-off-by: siddontang [email protected]

siddontang

Merge branch 'master' of github.com:siddontang/xkcdsay

Signed-off-by: siddontang [email protected]

commit sha: e83648654b39e1009b8e3d8b03a3896459886d31

pushed 1 month ago
push

siddontang pushed to siddontang/xkcdsay

siddontang

add a readme for xkcdsay

Signed-off-by: siddontang [email protected]

commit sha: ab2b05ee2f380dfb2bab20326cd0cc221584bafc

pushed 1 month ago
Dec
23
1 month ago
push

siddontang pushed to siddontang/xkcdsay

siddontang

sync xkcd comic to db

Signed-off-by: siddontang [email protected]

commit sha: 767494040103a727b8afa11a7101033984270d8a

pushed 1 month ago
issue

siddontang opened an issue in mattn/go-sixel

siddontang

gosr shows an extra highlight '%' at the end of the output

Hi, I tried to show the xkcd picture in the terminal with gosr, but I found a highlighted '%' at the end of the output. Is this expected?

wget https://imgs.xkcd.com/comics/barrel_cropped_\(1\).jpg
gosr barrel_cropped_\(1\).jpg


issue

siddontang commented on an issue in mattn/go-sixel

siddontang

gosr shows an extra highlight '%' at the end of the output

Hi, I tried to show the xkcd picture in the terminal with gosr, but I found a highlighted '%' at the end of the output. Is this expected?

wget https://imgs.xkcd.com/comics/barrel_cropped_\(1\).jpg
gosr barrel_cropped_\(1\).jpg


siddontang

Got it. I was using zsh, which shows a highlighted '%' when the output does not end with a newline; with bash, the '%' disappeared.

issue

siddontang opened an issue in mattn/go-sixel

siddontang

gosr shows an extra highlight '%' at the end of the output

Hi, I tried to show the xkcd picture in the terminal with gosr, but I found a highlighted '%' at the end of the output. Is this expected?

wget https://imgs.xkcd.com/comics/barrel_cropped_\(1\).jpg
gosr barrel_cropped_\(1\).jpg


issue

siddontang commented on an issue in aws/clock-bound

siddontang

clock-bound-c: use array instead of vec for performance

Issue #, if available:

None

Description of changes:

I found that a vector is used here to get the timestamps. IMO, these functions are called very frequently, so it is better to use an array instead of a vector.

I wrote a simple benchmark below:

#![feature(test)]
extern crate test;
use test::{black_box, Bencher};
#[bench]
fn bench_vec(b: &mut Bencher) {
    b.iter(|| {
        for _ in 0..16 {
            let mut request: Vec<u8> = Vec::new();
            // Inner closure, the actual test
            for i in 0..4 {
                request.push(i);
            }
            black_box(request);
        }
    });
}
#[bench]
fn bench_vec_with_capacity(b: &mut Bencher) {
    b.iter(|| {
        for _ in 0..16 {
            let mut request: Vec<u8> = Vec::with_capacity(4);
            // Inner closure, the actual test
            for i in 0..4 {
                request.push(i);
            }
            black_box(request);
        }
    });
}
#[bench]
fn bench_array(b: &mut Bencher) {
    b.iter(|| {
        for _ in 0..16 {
            let mut request: [u8; 4] = [0; 4];
            // Inner closure, the actual test
            for i in 0..4 {
                request[i] = i as u8;
            }
            black_box(request);
        }
    });
}

The benchmark result is:

test bench_array            ... bench:           4 ns/iter (+/- 0)
test bench_vec               ... bench:       1,014 ns/iter (+/- 143)
test bench_vec_with_capacity ... bench:         977 ns/iter (+/- 70)

As you can see, using an array gives much better performance.
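
For illustration, here is a minimal sketch of the kind of change (the request layout and function name are assumptions for this example, not the actual clock-bound-c code):

// A hypothetical example, not the actual clock-bound-c code: build a
// fixed-size 4-byte request on the stack instead of pushing bytes into a Vec.
fn build_request(version: u8, command: u8) -> [u8; 4] {
    // [version, command, reserved, reserved] -- no heap allocation needed.
    [version, command, 0, 0]
}

fn main() {
    let request = build_request(1, 1);
    assert_eq!(request, [1, 1, 0, 0]);
}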

Because this repo doesn't have any unit tests, I tested it manually, and it seems to work well:

cargo run --example now /run/clockboundd/clockboundd.sock
The UTC timestamp 2021-12-14 14:24:20.048961000 has the following error bounds.
In nanoseconds since the Unix epoch: (1639491860040783810,1639491860057138190)
In UTC in date/time format: (2021-12-14 14:24:20.040783810, 2021-12-14 14:24:20.057138190)

cargo run --example before /run/clockboundd/clockboundd.sock
1639491872937419000 nanoseconds since the Unix Epoch is not before the current time's error bounds.
Waiting 1 second...
1639491872937419000 nanoseconds since the Unix Epoch is before the current time's error bounds.

cargo run --example after /run/clockboundd/clockboundd.sock
1639491883489757000 nanoseconds since the Unix Epoch is after the current time's error bounds.
Waiting 2 seconds...
1639491883489757000 nanoseconds since the Unix Epoch is not after the current time's error bounds.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

Dec
22
1 month ago
Dec
15
1 month ago
issue

siddontang commented on an issue in aws/clock-bound

siddontang

clock-bound-c: use array instead of vec for performance

Issue #, if available:

None

Description of changes:

I found that a vector is used here to get the timestamps. IMO, these functions are called very frequently, so it is better to use an array instead of a vector.

I wrote a simple benchmark below:

#![feature(test)]
extern crate test;
use test::{black_box, Bencher};
#[bench]
fn bench_vec(b: &mut Bencher) {
    b.iter(|| {
        for _ in 0..16 {
            let mut request: Vec<u8> = Vec::new();
            // Inner closure, the actual test
            for i in 0..4 {
                request.push(i);
            }
            black_box(request);
        }
    });
}
#[bench]
fn bench_vec_with_capacity(b: &mut Bencher) {
    b.iter(|| {
        for _ in 0..16 {
            let mut request: Vec<u8> = Vec::with_capacity(4);
            // Inner closure, the actual test
            for i in 0..4 {
                request.push(i);
            }
            black_box(request);
        }
    });
}
#[bench]
fn bench_array(b: &mut Bencher) {
    b.iter(|| {
        for _ in 0..16 {
            let mut request: [u8; 4] = [0; 4];
            // Inner closure, the actual test
            for i in 0..4 {
                request[i] = i as u8;
            }
            black_box(request);
        }
    });
}

The benchmark result is:

test bench_array            ... bench:           4 ns/iter (+/- 0)
test bench_vec               ... bench:       1,014 ns/iter (+/- 143)
test bench_vec_with_capacity ... bench:         977 ns/iter (+/- 70)

As you can see, using an array gives much better performance.

Because this repo doesn't have any unit tests, I tested it manually, and it seems to work well:

cargo run --example now /run/clockboundd/clockboundd.sock
The UTC timestamp 2021-12-14 14:24:20.048961000 has the following error bounds.
In nanoseconds since the Unix epoch: (1639491860040783810,1639491860057138190)
In UTC in date/time format: (2021-12-14 14:24:20.040783810, 2021-12-14 14:24:20.057138190)

cargo run --example before /run/clockboundd/clockboundd.sock
1639491872937419000 nanoseconds since the Unix Epoch is not before the current time's error bounds.
Waiting 1 second...
1639491872937419000 nanoseconds since the Unix Epoch is before the current time's error bounds.

cargo run --example after /run/clockboundd/clockboundd.sock
1639491883489757000 nanoseconds since the Unix Epoch is after the current time's error bounds.
Waiting 2 seconds...
1639491883489757000 nanoseconds since the Unix Epoch is not after the current time's error bounds.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

siddontang

Yes, it is an array; I sometimes use the term slice myself. I have changed it.

push

siddontang pushed to siddontang/clock-bound

siddontang

clock-bound-c: use array instead of vec for performance

commit sha: b8f0ecc7b9de4cc491720f42021d5d69b8596c81

pushed 1 month ago
Dec
14
1 month ago
pull request

siddontang opened a pull request in aws/clock-bound

siddontang

clock-bound-c: use slice instead of vec for performance

Issue #, if available:

None

Description of changes:

I found that a vector is used here to get the timestamps. IMO, these functions are called very frequently, so it is better to use a slice instead of a vector.

I wrote a simple benchmark below:

#![feature(test)]
extern crate test;
use test::{black_box, Bencher};
#[bench]
fn bench_vec(b: &mut Bencher) {
    b.iter(|| {
        for _ in 0..16 {
            let mut request: Vec<u8> = Vec::new();
            // Inner closure, the actual test
            for i in 0..4 {
                request.push(i);
            }
            black_box(request);
        }
    });
}
#[bench]
fn bench_vec_with_capacity(b: &mut Bencher) {
    b.iter(|| {
        for _ in 0..16 {
            let mut request: Vec<u8> = Vec::with_capacity(4);
            // Inner closure, the actual test
            for i in 0..4 {
                request.push(i);
            }
            black_box(request);
        }
    });
}
#[bench]
fn bench_slice(b: &mut Bencher) {
    b.iter(|| {
        for _ in 0..16 {
            let mut request: [u8; 4] = [0; 4];
            // Inner closure, the actual test
            for i in 0..4 {
                request[i] = i as u8;
            }
            black_box(request);
        }
    });
}

The benchmark result is:

test bench_slice             ... bench:           4 ns/iter (+/- 0)
test bench_vec               ... bench:       1,014 ns/iter (+/- 143)
test bench_vec_with_capacity ... bench:         977 ns/iter (+/- 70)

As you can see, using a slice gives much better performance.

Because this repo doesn't have any unit tests, I tested it manually, and it seems to work well:

cargo run --example now /run/clockboundd/clockboundd.sock
The UTC timestamp 2021-12-14 14:24:20.048961000 has the following error bounds.
In nanoseconds since the Unix epoch: (1639491860040783810,1639491860057138190)
In UTC in date/time format: (2021-12-14 14:24:20.040783810, 2021-12-14 14:24:20.057138190)

cargo run --example before /run/clockboundd/clockboundd.sock
1639491872937419000 nanoseconds since the Unix Epoch is not before the current time's error bounds.
Waiting 1 second...
1639491872937419000 nanoseconds since the Unix Epoch is before the current time's error bounds.

cargo run --example after /run/clockboundd/clockboundd.sock
1639491883489757000 nanoseconds since the Unix Epoch is after the current time's error bounds.
Waiting 2 seconds...
1639491883489757000 nanoseconds since the Unix Epoch is not after the current time's error bounds.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

push

siddontang pushed to siddontang/clock-bound

siddontang

clock-bound-c: use slice instead of vec for performance

commit sha: 65885e345f259f9fee643056f809b6ee6308db4f

pushed 1 month ago
fork

siddontang forked aws/clock-bound

⚡ Used to generate and compare bounded timestamps.
forked 1 month ago
Nov
30
1 month ago
pull request

siddontang merge to pingcap/docs

siddontang

Update haproxy best practices using haproxy2.5.0

First-time contributors' checklist

What is changed, added or deleted? (Required)

1. Remove starting and stopping HAProxy via systemd; 2. Change the way HAProxy is installed.

1. The original approach of installing HAProxy via yum results in an HAProxy version that is too old.

Which TiDB version(s) do your changes apply to? (Required)

1. HAProxy 2.5.0 is currently used with TiDB 5.3.0.

Tips for choosing the affected version(s):

By default, CHOOSE MASTER ONLY so your changes will be applied to the next TiDB major or minor releases. If your PR involves a product feature behavior change or a compatibility change, CHOOSE THE AFFECTED RELEASE BRANCH(ES) AND MASTER.

For details, see tips for choosing the affected versions.

  • master (the latest development version)
  • v5.3 (TiDB 5.3 versions)
  • v5.2 (TiDB 5.2 versions)
  • v5.1 (TiDB 5.1 versions)
  • v5.0 (TiDB 5.0 versions)
  • v4.0 (TiDB 4.0 versions)
  • v3.1 (TiDB 3.1 versions)
  • v3.0 (TiDB 3.0 versions)
  • v2.1 (TiDB 2.1 versions)

What is the related PR or file link(s)?

  • This PR is translated from:
  • Other reference link(s):

Do your changes match any of the following descriptions?

  • Delete files
  • Change aliases
  • Need modification after applied to another branch
  • Might cause conflicts after applied to another branch
close pull request

siddontang wants to merge pingcap/docs

siddontang

Update haproxy best practices using haproxy2.5.0

First-time contributors' checklist

What is changed, added or deleted? (Required)

1. Remove starting and stopping HAProxy via systemd; 2. Change the way HAProxy is installed.

1. The original approach of installing HAProxy via yum results in an HAProxy version that is too old.

Which TiDB version(s) do your changes apply to? (Required)

1. HAProxy 2.5.0 is currently used with TiDB 5.3.0.

Tips for choosing the affected version(s):

By default, CHOOSE MASTER ONLY so your changes will be applied to the next TiDB major or minor releases. If your PR involves a product feature behavior change or a compatibility change, CHOOSE THE AFFECTED RELEASE BRANCH(ES) AND MASTER.

For details, see tips for choosing the affected versions.

  • master (the latest development version)
  • v5.3 (TiDB 5.3 versions)
  • v5.2 (TiDB 5.2 versions)
  • v5.1 (TiDB 5.1 versions)
  • v5.0 (TiDB 5.0 versions)
  • v4.0 (TiDB 4.0 versions)
  • v3.1 (TiDB 3.1 versions)
  • v3.0 (TiDB 3.0 versions)
  • v2.1 (TiDB 2.1 versions)

What is the related PR or file link(s)?

  • This PR is translated from:
  • Other reference link(s):

Do your changes match any of the following descriptions?

  • Delete files
  • Change aliases
  • Need modification after applied to another branch
  • Might cause conflicts after applied to another branch
siddontang

please remove the redundant space:

make -j 8 TARGET=linux-glibc USE_THREAD=1
Nov
20
2 months ago
close pull request

siddontang wants to merge tikv/tikv

siddontang

raftstore: increase batch for raftlog-gc-worker (release-5.1-20211115)

What problem does this PR solve?

Issue Number: close https://github.com/tikv/tikv/issues/11404

What is changed and how it works?

Batch more deleted keys together. The deleted keys are much smaller than the entries that were originally put, so 256 is too small a limit for each write batch in raftlog-gc-worker. Also, the GC tasks of most regions are small, often fewer than 100 keys, so I batch the keys of multiple regions and write them together.
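
For illustration, here is a minimal sketch of the batching idea (the struct, function names, and threshold are assumptions for this example, not the actual TiKV raftstore code):

// A hypothetical sketch, not the actual raftstore code: accumulate the raft-log
// GC keys of multiple regions and flush them in one write batch once a larger
// threshold is reached, instead of writing every 256 keys per region.
const BATCH_LIMIT: usize = 4096; // assumed value, much larger than 256

struct GcKeyBatcher {
    keys: Vec<Vec<u8>>,
}

impl GcKeyBatcher {
    fn new() -> Self {
        GcKeyBatcher { keys: Vec::new() }
    }

    // Collect one region's keys; write them out only when the combined batch is full.
    fn add_region_keys<F: FnMut(&[Vec<u8>])>(&mut self, region_keys: Vec<Vec<u8>>, write_batch: &mut F) {
        self.keys.extend(region_keys);
        if self.keys.len() >= BATCH_LIMIT {
            write_batch(&self.keys);
            self.keys.clear();
        }
    }

    // Flush whatever is left after all regions in this round are processed.
    fn finish<F: FnMut(&[Vec<u8>])>(&mut self, write_batch: &mut F) {
        if !self.keys.is_empty() {
            write_batch(&self.keys);
            self.keys.clear();
        }
    }
}

fn main() {
    let mut batcher = GcKeyBatcher::new();
    let mut write_batch = |keys: &[Vec<u8>]| println!("write {} keys in one batch", keys.len());
    batcher.add_region_keys(vec![vec![1], vec![2]], &mut write_batch);
    batcher.finish(&mut write_batch);
}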

Related changes

  • PR to update pingcap/docs/pingcap/docs-cn:
  • PR to update pingcap/tidb-ansible:
  • Need to cherry-pick to the release branch

Check List

Tests

  • Unit test

Side effects

  • Performance regression
    • Consumes more CPU
    • Consumes more MEM
  • Breaking backward compatibility

Release note

None
siddontang

Can we try delete_range to speed this up?

pull request

siddontang merge to tikv/tikv

siddontang

raftstore: increase batch for raftlog-gc-worker (release-5.1-20211115)

What problem does this PR solve?

Issue Number: close https://github.com/tikv/tikv/issues/11404

What is changed and how it works?

Batch more deleted keys together. The deleted keys are much smaller than the entries that were originally put, so 256 is too small a limit for each write batch in raftlog-gc-worker. Also, the GC tasks of most regions are small, often fewer than 100 keys, so I batch the keys of multiple regions and write them together.

Related changes

  • PR to update pingcap/docs/pingcap/docs-cn:
  • PR to update pingcap/tidb-ansible:
  • Need to cherry-pick to the release branch

Check List

Tests

  • Unit test

Side effects

  • Performance regression
    • Consumes more CPU
    • Consumes more MEM
  • Breaking backward compatibility

Release note

None