IPNS very slow #3860

Open · nezzard opened this issue Apr 11, 2017 · 102 comments
Labels: need/analysis (Needs further analysis before proceeding), topic/ipns, topic/perf (Performance)

Comments

@nezzard commented Apr 11, 2017

Hi, is it normal that IPNS loading is very slow?
I tried to build something like a CMS with dynamic content, but IPNS is too slow. When I load a site via IPNS, the first load is very slow; if I reload the page right after, it loads quickly. But if I reload after a few minutes, it loads slowly again.

@whyrusleeping (Member)

@nezzard This is generally a known issue, but providing more information is helpful. Are you resolving from your local node? Or are you resolving through the gateway?

@nezzard (Author) commented Apr 11, 2017

@whyrusleeping Through the local node, but sometimes the gateway is faster, sometimes local is faster.
So, for now I can't use IPNS normally?

@whyrusleeping (Member)

@nezzard When using it locally, how many peers do you have connected? (ipfs swarm peers) The primary slowdown of IPNS is connecting to enough of the right peers on the DHT; once that's warmed up it should be faster.

DHT-based IPNS isn't as fast as something more centralized, but you can generally cache the results for longer than ipfs caches them. We should look at making these caches more configurable, and look into other IPNS slowdowns.

When you say it's 'very slow', what time range exactly are you experiencing? 1-5 seconds? 5-10? 10+?
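A quick way to put numbers on both questions, for anyone else reporting back (the IPNS name below is a placeholder):

    # how many peers is the node connected to?
    ipfs swarm peers | wc -l

    # how long does a cold resolve take? (substitute your own IPNS name)
    time ipfs name resolve /ipns/QmYourPeerID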

@nezzard (Author) commented Apr 12, 2017

@whyrusleeping Sometimes it's really fast, sometimes I get this:
https://yadi.sk/i/mL6Q4OFX3Gu2nk

Output of ipfs swarm peers:
/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ /ip4/104.131.180.155/tcp/4001/ipfs/QmeXAm1zdLbPaA9wVemaCjbeJgWsCrH4oSCrK2F92yWnbm /ip4/104.133.2.68/tcp/53366/ipfs/QmTAmvzNBsicnajpLTUnVqcPankP3pNDoqHpAtUNkK2rU7 /ip4/104.155.150.120/tcp/4001/ipfs/Qmep8LtipXUG4WSNgJGEtwmuaQQt77wRDL5nkMpZyDqrD3 /ip4/104.236.169.138/tcp/4001/ipfs/QmYodPH2C6xEYFPxNhK4how1frPdXFWVrZ3QGynTFCFfBe /ip4/104.236.176.52/tcp/4001/ipfs/QmSoLnSGccFuZQJzRadHn95W2CrSFmZuTdDWP8HXaHca9z /ip4/104.236.176.59/tcp/4001/ipfs/QmQ8MYL1ANybPTM5uamhzTnPwDwCFgfrdpYo9cwiEmVsge /ip4/104.236.179.241/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM /ip4/104.236.76.40/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64 /ip4/104.40.212.43/tcp/4001/ipfs/QmcvFeaip7B3RDmLU9MgqGcCRv881Citnv5cHkrTSusZD6 /ip4/106.246.181.100/tcp/4001/ipfs/QmQ6TbUShnjKbnJDSYdxaBb78Dz6fF82NMetDKnau3k7zW /ip4/108.161.120.136/tcp/27040/ipfs/QmNRM8W3u6gxAvm8WqSXqCVC6Wzknq66tdET6fLGh8zCVk /ip4/108.28.144.234/tcp/5002/ipfs/QmWfjhgBWjwiesWQPCC4CSV4q83vyBdSA6LRSaZLLCZoVH /ip4/112.196.16.84/tcp/4002/ipfs/QmbELjeVvfpbGYNcC4j4PPr6mnssp6jKWd4D6Jht8jDhiW /ip4/113.253.98.194/tcp/54388/ipfs/QmcL9BdiHQbRng6PvDzbJye7yG73ttNAkhA5hLGn22StM8 /ip4/121.122.82.230/tcp/58960/ipfs/QmPz9uv4HUP1er5TGaaoc4NVCbN8VFMrf5gwvxfmtSAmGv /ip4/128.199.219.111/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu /ip4/128.32.112.184/tcp/4001/ipfs/QmeM9rJsk6Ke57xMwMuCkJBb9pYGx7qVRkgzVD6zhxPaBx /ip4/128.32.153.243/tcp/1030/ipfs/QmYoH11GjCyoQW4HyZtSZcL8BqBuudaXWi1pdYyy1AroFd /ip4/134.71.135.172/tcp/4001/ipfs/QmU3q7GxnnhJabNh3ukDq2QsnzwzVpcT5FEPBcJcRu3Wq1 /ip4/138.201.53.216/tcp/4001/ipfs/QmWmJfJKfJmKtRqsTnygmWgJfsmHnXo4p3Uc1Atf8N5iQ5 /ip4/139.162.191.34/tcp/4001/ipfs/QmYfmBh8Pud13uwc5mbtCGRgYbxzsipY87xgjdj2TGeJWm /ip4/142.4.211.131/tcp/4001/ipfs/QmWWWLYe16uU53wPgdP3V5eEb8QRwoqUb35h5EMWoEyWaJ /ip4/159.203.77.184/tcp/4001/ipfs/QmeLGqhi5dFBpxD4xuzAWWcoip69i5SaneXL9Jb83sxSXo /ip4/163.172.222.20/tcp/4001/ipfs/Qmd4up4kjr8TNWc4rx6r4bFwpe6TJQjVVmfwtiv4q3FSPx /ip4/167.114.2.68/tcp/4001/ipfs/QmfY24aJDGyPyUJVyzL1QHPoegmFKuuScoCKrBk9asoTFG /ip4/168.235.149.174/tcp/4001/ipfs/QmbPFhS9YwUxE4rPeaqd7Vn6GEESd1MUUM67ECtYchHyFB /ip4/168.235.79.131/tcp/4001/ipfs/QmaqsmhXtQfKfiWi3jXdb4PxrN8JNi2zmXN13MDEktjK8H /ip4/168.235.90.18/tcp/4001/ipfs/QmWtA6WFyo44pYzQzHFtrtMWPHZiFEDFjUWihEY49obZ1e /ip4/169.231.33.236/tcp/55897/ipfs/QmQyTC3Bg2BkctdisKBvWPoG8Avr7HMrnNMNJS25ubjVUU /ip4/173.95.181.110/tcp/42615/ipfs/QmTxQ2Bv9gppcNvzAtRJiwNAahVhkUHxFt5mMYkW9qPjE6 /ip4/176.9.85.5/tcp/4001/ipfs/QmNUZW8yuNxdLSPMwvaafiMVN8fof5r2PrsUJAgyAn8Udb /ip4/178.19.251.249/tcp/4401/ipfs/QmR2FRyigN82VJc3MFZNz79L8Hunc3XvfAxU3eA3McRPHg /ip4/178.209.50.28/tcp/30852/ipfs/QmVfwJUWnj7GAkQtV4cDVrNDnZEwi4oxnyZaJc7xY7zaN3 /ip4/178.209.50.28/tcp/36706/ipfs/QmWCNyBxJS9iuwCrnnA3QfcrS9Yb67WXnZTiXZsMDFj2ja /ip4/178.62.61.185/tcp/4001/ipfs/QmSoLMeWqB7YGVLJN3pNLQpmmEk35v6wYtsMGLzSr5QBU3 /ip4/180.181.245.242/tcp/4001/ipfs/QmZg57eGmSgXs8cXeGJNsBknZTxdphZH9wWLDx8TdBQrMY /ip4/185.10.68.111/tcp/4001/ipfs/QmTNjTQy6sGFG39VSunS4v1UZRfPFevtGzHwr2h1xfa5Bh /ip4/185.21.217.59/tcp/4001/ipfs/QmQ4GzeQzyW3VcBgVacKSjrUrBxEo6s7VQrrkQyQwi1sxs /ip4/185.32.221.138/tcp/4001/ipfs/QmcmTqKUdasx9xwbG2DcyY95q6GcMzx8uUC9fVqdTyETrZ /ip4/185.61.148.187/tcp/4001/ipfs/QmQ5k9N7aVGECaNBLsX9ZeYJCYvcNWcKDZ8VacV9HGUwSC /ip4/185.97.214.103/tcp/4001/ipfs/QmbKGbNNyvBe6A7kUYQtUpXZU61QiTMnGGjqBx6zuvrYyj /ip4/188.226.129.60/tcp/4001/ipfs/QmWBthnxqH6CpAA9k9XGP9TqWMZGT6UC2DZ4x9qGr7eapc 
/ip4/188.25.26.115/tcp/32349/ipfs/QmVUR2mtHXCnm7KVyEjBQe1Vdp8XWG6RXC8f8FfrnAxCGJ /ip4/188.25.26.115/tcp/53649/ipfs/QmXctexVWdB4PqAquZ6Ksmu1FwwRMiYhQNfoiaWV4iqEFn /ip4/188.40.114.11/tcp/4001/ipfs/QmZY7MtK8ZbG1suwrxc7xEYZ2hQLf1dAWPRHhjxC8rjq8E /ip4/188.40.41.114/tcp/4001/ipfs/QmUYYq1rYdhmrU7za9zrc6adLmwFBKYx3ksTVU3y1RHomm /ip4/192.124.26.250/tcp/16808/ipfs/QmUnwLT7GK8yCxHrpEELTyHVwGFhiZFwmjrq3jypG9n1k8 /ip4/192.124.26.250/tcp/21486/ipfs/QmeBT8g5ekgXaF4ZPqAi1Y8ssuQTjtzacWB7HC7ZHY8CH7 /ip4/192.131.44.99/tcp/4001/ipfs/QmWQBr5KAnCpGiQa5888DYsJc4gF7x7SDzpT6eVW2SoMMQ /ip4/192.52.2.2/tcp/4001/ipfs/QmeJENdKrdD8Bcj6iSrYPAwfQpR2K1nC8aYFkZ7wXdN9ic /ip4/194.100.58.189/tcp/4001/ipfs/QmVPCaHpUJ2eKVMSgb54zZhYRUKokNsX32C4PSRWiKWY6w /ip4/194.135.91.244/tcp/4001/ipfs/QmbE4S5EBBuY7du97ARD3BizNqpdcwQ3iH1aGyo5c8Ezmb /ip4/195.154.182.94/tcp/1031/ipfs/QmUSfsmVqD8TTgnUcPDTrd24SbWDEpnmkWWr7eqbJT2g8y /ip4/199.188.101.24/tcp/4001/ipfs/QmSsjprNEhoDZJAZYscB4G23b1dhxJ1cmiCdC5k73N8Jra /ip4/204.236.253.32/tcp/4001/ipfs/QmYf9BoND8MCHfmzihpseFc6MA6JwBV1ZvHsSMPJVW9Hww /ip4/206.190.135.76/tcp/4001/ipfs/QmTRmYCFGJLz2s5tfnHiB1kwrfrtVSxKeSPxojMioZKVH6 /ip4/212.227.249.191/tcp/4001/ipfs/QmcZrBqWBYV3RGsPuhQX11QzpKAQ8SYfMYL1dGXuPmaDYF /ip4/212.47.243.156/tcp/4001/ipfs/QmPCfdoA8aDscrfNVAhB12YYJJ2CR9mDG2WtKYFoxwL182 /ip4/213.108.213.138/tcp/4001/ipfs/QmWHo4hLG3tkmfuCot3xGCzE2a822MCNQ1mAx1tdEXVL46 /ip4/213.32.16.10/tcp/4001/ipfs/QmcWjSF6prpJwBZsfPSfzGEL61agU1vcMNCX8K6qaH5PAq /ip4/217.210.239.98/tcp/48069/ipfs/QmWGUTL6pQe4ryneBarFqnMdFwTq847a2DnWNo4oYRHxEJ /ip4/217.234.48.60/tcp/65012/ipfs/QmPPnZRcPCPxDvqgz3nyg5QshSzCzqa837ABFU4H4ZzUQP /ip4/23.250.20.244/tcp/4001/ipfs/QmUgNCzhgGvjn9DAs22mCJ7bv3sFp6PWPD6Egt9aPopjVn /ip4/34.223.212.29/tcp/1024/ipfs/QmcXwJ34KM17jkwYwGjgUFvG7zBgGGnUXRYCJdvAPTc8CB /ip4/35.154.222.183/tcp/4001/ipfs/Qmecb2A1Ki34eb4jUuaaWBH8A3rRhiLaynoq4Yj7issF1L /ip4/37.187.116.23/tcp/4001/ipfs/QmbqE6UfCJaXST3i65zbr649s8cJCUoP9m3UFUrXcNgeDn /ip4/37.187.98.185/tcp/1045/ipfs/QmS7djjNercLL4R4kbEjs6eGtxmAiuWMwnvAhP6AkFB64U /ip4/37.205.9.176/tcp/4001/ipfs/QmdX1zPzUtGJzcQm2gz6fyiaX7XgthK5d4LNSJq3rUAsiP /ip4/40.112.223.87/tcp/4001/ipfs/QmWPSzKERs6KAjb8QfSXViFqyEUn3VZYYnXjgG6hJwXWYK /ip4/45.32.155.49/tcp/4001/ipfs/QmYdn8trPQMRZEURK3BRrwh2kSMrb6r6xMoFr1AC1hRmNG /ip4/45.63.24.86/tcp/4001/ipfs/Qmd66qwujno615ZPiJZYTm12SF1c9fuHcTMSU9mA4gvuwM /ip4/49.77.250.124/tcp/20540/ipfs/QmPXWsm3wCRdyTZAeu4gEon7i1xSQ1QsWsR2X6GpAB3x6r /ip4/5.186.55.132/tcp/1024/ipfs/QmR1mXyic9jSbyzLtnBU9gjbFY8K3TFHrpvJK88LSyPnd9 /ip4/5.28.92.193/tcp/4001/ipfs/QmZ9RMTK8YrgFY7EaYsWnE2AsDNHu1rm5LqadvhFmivPWF /ip4/5.9.150.40/tcp/4737/ipfs/QmaeXrsLHWm4gbjyEUJ4NtPsF3d36mXVzY5eTBQHLdMQ19 /ip4/50.148.88.236/tcp/4001/ipfs/QmUeaH7miiLjxneP3dgJ7EgYxCe6nR16C7xyA5NDzBAcP3 /ip4/50.31.11.244/tcp/4001/ipfs/QmYMdi1e6RV7nJ4xoNUcP4CrfuNdpskzLQ6YBT4xcdaKAV /ip4/50.53.255.232/tcp/20792/ipfs/QmTaqVy1m5MLUh2vPSU64m1nqBj5n3ghovXZ48V6ThLiLj /ip4/51.254.25.17/tcp/4002/ipfs/QmdKbeXoXnMbPDfLsAFPGZDJ41bQuRNKALQSydJ66k1FfH /ip4/52.168.18.22/tcp/9001/ipfs/QmV9eRZ3uJjk461cWSPc8gYTCqWmxLxMU6SFWbDjdYAsxA /ip4/52.170.218.157/tcp/9001/ipfs/QmRZvZiZrhJdZoDruT7w2QLKTdniThNwrpNeFFdZXAzY1s /ip4/52.233.193.228/tcp/4001/ipfs/QmcdQmd42P3Mer1XQrENkpKEW9Z97ucBb5iw3bEPqFnqHe /ip4/52.53.224.174/tcp/4001/ipfs/QmdhVq4BHYLmrsatWxw8FHVCspdTabdgptUaGxW2ow2F7Q /ip4/52.7.58.3/tcp/4001/ipfs/QmdG5Y7xqrtDkVjP1dDuwWvPcVHQJjyJqG5xK62VzMth2x /ip4/54.178.171.10/tcp/4091/ipfs/QmdtfJBMitotUWBX5YZ6rYeaYRFu6zfXXMZP6fygEWK2iu /ip4/54.190.54.51/tcp/4001/ipfs/QmZobm32XH2UiGi5uAg2KabEh6kRL6x64HB56ZF3oA4awR 
/ip4/54.208.247.108/tcp/4001/ipfs/QmdDyCsGm8Zzv4uyKB4MzX8wP7QDfSfVCsCNMZV5UxNgJd /ip4/54.70.38.180/tcp/1024/ipfs/QmSHCEevPPowdJKHPwivtTW6HsShGQz5qVrFytDeW1dHDv /ip4/54.70.48.46/tcp/1030/ipfs/QmeDcUc9ytZdLcuPHwDNrN1gj415ZFHr27gPgnqJqbf1hg /ip4/54.71.244.118/tcp/4001/ipfs/QmaGYHEnjr5SwSrjP44FHGahtdk3ShPf3DBYmDrZCa1nbS /ip4/54.89.97.141/tcp/4001/ipfs/QmRjxYdkT4x3QpAWqqcz1wqXhTUYrNBm6afaYGk5DQFeY8 /ip4/58.179.165.141/tcp/4001/ipfs/QmYoXumXQYX3FknhH1drVhgqnJd2vQ1ExECLAHykA1zhJZ /ip4/63.96.220.210/tcp/4001/ipfs/QmX4SxZFMgds5b1mf3y4KKHsrLijrFvKZ6HfjZN6DkY4j5 /ip4/65.19.134.242/tcp/4001/ipfs/QmYCLRXcux9BrLSkv3SuGEW6iu7nUD7QSg3YVHcLZjS5AT /ip4/66.56.15.111/tcp/4001/ipfs/QmZxW1oKFYNhQLjypNtUZJqtZMvzk1JNAQnfGLczan2RD2 /ip4/67.174.159.210/tcp/4001/ipfs/QmRNuP6GpZ4tAMvfgXNeCB6At4uRGqqTXBusHRxFh5n8Eq /ip4/69.12.67.106/tcp/4001/ipfs/QmT1q92VyoqysvC268kegsdxeNLR8gkEgpFzmnKWfqp29V /ip4/69.61.33.241/tcp/4001/ipfs/QmTtggHgG1tjAHrHfBDBLPmUvn5BwNRpZY4qMJRXnQ7bQj /ip4/69.62.223.164/tcp/4001/ipfs/QmZrzE3Gye318CU7ZsZ3YeEnw6L7RkbhBvmfU7ebRQEF54 /ip4/71.204.170.241/tcp/4001/ipfs/QmTwvAzEoWZjFAsv9rhXrcn1XPb7qhxDVZN1Q61AnZbqmM /ip4/72.177.11.53/tcp/4001/ipfs/QmPxFX8j1zbHNzLgmeScjX7pjKho2EgzGLaiANFTjLUAb4 /ip4/75.112.252.166/tcp/11465/ipfs/QmRWC4hgiM7Tzchz2uLAN6Yt1xWptqZWYPb5AWvv2DeMhp /ip4/78.46.68.56/tcp/53378/ipfs/QmbE9eo6PXuSHAASumNVZBKvPsVpSjgRDEqoMNHJ49cBKz /ip4/78.56.33.225/tcp/4001/ipfs/QmXokcQHHxSCNZgFv28hN7dTzxbLcXpCM1MUDRXa8G9wNK /ip4/79.175.125.102/tcp/58126/ipfs/QmdDA6QfLQ5sRez6Ev15yDCdumvBuYygeNjVZqFef693Gn /ip4/80.167.121.206/tcp/4001/ipfs/QmfFB7ShRaVPEy9Bbr9fu9xG947KCZqhCTw1utBNHBwGK2 /ip4/82.119.233.36/tcp/4001/ipfs/QmY3xH9PWc4NpmupJ9KWE4r1w9XshvW6oGVeHAApuvVU2K /ip4/82.197.194.135/tcp/41271/ipfs/QmQLW2mhJYPmhYmhkA2FZwFGdEXFjnsprB5DfBxCMRdBk9 /ip4/82.227.20.27/tcp/50190/ipfs/QmY8bMNkkNZvxw1pGVi4pqiXeszZnHY9wwr1Qvyv6QmfsE /ip4/84.217.19.85/tcp/62227/ipfs/QmaD38nfW4u97DPHDLz1cYWzhWUYPKrEianJs2dKctutpf /ip4/84.217.19.85/tcp/63787/ipfs/QmXKd1pJxTqTWNgGENcX2daiGLgWRPDDsXJe8eecQCr6Vh /ip4/86.0.212.51/tcp/50000/ipfs/Qmb9ECxYmPL9sc8jRNAwpGhgjEiXVHKb2qfS8jtjN5z7Pp /ip4/88.153.7.190/tcp/17396/ipfs/QmWTyP5FFpykrfocJ14AcQcwnuSdKAnVASWuFbtqCw3RPT /ip4/88.198.52.13/tcp/4001/ipfs/QmNhwcGyu8pyCHzHS9SuVyVNbg8SjpTKyFb72oofvL4Nf5 /ip4/88.99.13.90/tcp/4001/ipfs/QmTCM4KLAF1xG4ri2JBRigmjf8CLwAzkTs6ckCQbHaArR6 /ip4/89.23.224.58/tcp/37305/ipfs/QmWqjusr86LThkYgjAbNMa8gJ55wzVufkcv5E2TFfzYZXu /ip4/89.64.51.138/tcp/47111/ipfs/Qme63idhHJ2awgkdG952iddw5Ta9nrfQB3Bpn83V1Bqgvv /ip4/91.126.106.78/tcp/21076/ipfs/QmdFZQdcLbgjK5uUaJS2EiKMs4d2oke1DdyGoHAKRMcaXk /ip4/92.222.85.0/tcp/4001/ipfs/QmTm7RdPXbvdSwKQdjEcbtm4JKv1VebzJR7RDra3DpiWd7 /ip4/93.11.115.24/tcp/34730/ipfs/QmRztqxTvxvQXWi7JbtTXijzzngpDgVYwQ2YBccVkt7qjn /ip4/93.182.128.2/tcp/39803/ipfs/Qma8oBW3GNWvNbdEzWiNWenrGtF3DhDUBcUrrsTJBiNKJ2 /ip4/95.31.15.24/tcp/4001/ipfs/QmPxgtHFqyAdby5oqLT5UJGMjPFyGHu5zQcpZ1sKYcuX75 /ip4/96.84.144.177/tcp/4001/ipfs/Qma7U9CNhPnfLit2UL88CFKvizFCZ7pnxB38N3Y5WsZwFH

Kubuxu added the need/analysis, topic/ipns, and topic/perf labels Apr 17, 2017
@Kubuxu (Member) commented Apr 17, 2017

Which ipfs version are you running?

@kikoncuo

@nezzard What tool are you using in your screenshot? I've seen it many times in the forums but I can't find it anywhere.

@nezzard (Author) commented Apr 24, 2017

@kikoncuo It's a tool from a cloud service similar to Dropbox:
https://disk.yandex.ua/

@nezzard (Author) commented Apr 24, 2017

@Kubuxu The latest at the time.

@kikoncuo

@nezzard I meant the tool you took the screenshot with, my bad.

@nezzard (Author) commented Apr 25, 2017

@Kubuxu It's a tool inside the Yandex Disk program.

@cpacia commented Apr 27, 2017

So let me tell you some tweaks I've made which have helped quite a bit.

  1. I made the DHT query size param accessible from the config. Setting it to 5 or 6 speeds it up quite a bit.

  2. I also added some caching to the resolver, so that if it can't find a record on the network (such as when it expires) it loads it from the local cache. Obviously each record that is fetched updates the cache. This isn't really speed related, but it does provide a slightly better UX, as data remains available after it drops out of the DHT.

  3. Using the cache from #2 for certain types of data where it doesn't matter if it's slightly stale, like profiles, I load the record from cache and use it to return the profile. Then, in the background, I do the IPNS call to fetch the latest profile and update the cache. This ensures that our profile calls are nearly instant while potentially being only slightly out of date.
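For illustration, the third tweak amounts to a stale-while-revalidate pattern; a minimal shell sketch, with a made-up cache path and IPNS name:

    CACHE=/tmp/profile.path
    NAME=/ipns/QmExamplePeerID

    # serve whatever we resolved last time, instantly (if anything is cached)
    cat "$CACHE" 2>/dev/null

    # meanwhile, refresh the cache in the background for the next caller
    (ipfs name resolve "$NAME" > "$CACHE.tmp" && mv "$CACHE.tmp" "$CACHE") &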

@whyrusleeping (Member)

We can probably add flags to the ipfs name resolve API that allow selection (per resolve) of the query size parameter, and also a way to say "just give me whatever value you have cached".

Both of those would be simple enough to implement without having to change too much.
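For readers arriving later: flags along these lines did eventually land in go-ipfs/kubo. Assuming a reasonably recent version, a resolve can trade completeness for speed like this (the IPNS name is a placeholder):

    # accept an answer after fewer DHT records, with a hard deadline
    ipfs name resolve --dht-record-count=4 --dht-timeout=10s /ipns/QmYourPeerID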

@whyrusleeping (Member)

Another thing we could do is have a command that returns IPNS results as they come in, and then, when enough have come in to make a decision, says "this is the best one". That way you could start working with the first one you receive, then switch to the right one when it arrives.
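This, too, eventually shipped as a flag; assuming a recent kubo, --stream emits candidate results as they are found rather than waiting for the best one:

    ipfs name resolve --stream /ipns/QmYourPeerID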

@MichaelMure (Contributor)

I have some trouble as well with IPNS. I have a Linux box and a Windows box on the same LAN running ipfs 0.4.9, and I can't resolve IPNS addresses published from the other side, even after several minutes. I have 400 peers connected on one side, 250 on the other.

@cpacia are your changes in a branch somewhere? That looks like a very handy addition for my project.

@MichaelMure (Contributor)

Answering my own question: the fork is here: https://github.com/OpenBazaar/go-ipfs

@whyrusleeping any idea how I can debug this issue?

@whyrusleeping (Member)

@MichaelMure you can't resolve at all? Or is it just very slow?

@MichaelMure (Contributor)

Sometimes it just takes time before it's able to resolve, and once it has resolved once it works properly. But in this case it didn't resolve at all, even after 30 minutes. It might be another issue, but without a way to find out what's going on inside ipfs, well...

@nezzard (Author) commented Jun 21, 2017

I think IPNS is currently too slow to be usable.
You can check http://ipfs.artpixel.com.ua/

It takes 15-20 seconds to load.

@hhff commented Jul 1, 2017

I'm also experiencing massive resolution times with IPNS. Same behavior over here: the first resolution can take multiple minutes, then once it's loaded, I can refresh the content in under a second.

If I leave it for a few minutes and then do another refresh, the request cycle repeats the same behavior.

The "cache" for the resolution only appears to stay warm for a short period of time.

@hhff commented Jul 1, 2017

I'm using a CNAME with _dnslink, for what it's worth.

Content is at www.ember-cli-deploy-ipfs.com
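For anyone reproducing: a dnslink setup like this is just a DNS TXT record, so it can be inspected directly (the answer shown here is illustrative):

    $ dig +short TXT _dnslink.www.ember-cli-deploy-ipfs.com
    "dnslink=/ipns/QmSomeKey"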

@alexandre1985

IPFS is unusable for me. I have the daemon running on both of my computers inside a LAN, with one "serving" a file (a video) that the other doesn't have. When I try to access that video from the PC that doesn't have the file, using localhost:8080/ipfs/... in my browser, the video keeps stopping and takes a huge amount of time to load. So huge that I can't watch the video.
When I netcat that video and pipe it through mplayer to the other computer, I can watch the video stream just fine.
So this is an IPFS problem, and it is a serious performance issue. So serious that it makes the technology not worth using (as of today, 2017-08-24).
IPFS isn't delivering what it promised. Very disappointed.

whyrusleeping added this to the IPFS 0.4.12 milestone Sep 2, 2017
@kesar commented Sep 2, 2017

> Very disappointed

You should ask for a refund 👍

@alexandre1985 commented Sep 3, 2017

@kesar I mean this out of love. @jbenet (Juan Benet) says it is going to release us from the backbone, but currently the IPFS network's performance is very weak.
I would like IPFS to succeed, but how can that be if I can watch a video faster through the backbone than through IPFS hosting the video file inside my LAN?
The performance of IPFS in this respect is weak, to put it modestly. You should try this experiment yourself.

@Calmarius

It took me more than a minute to resolve a name published by my own computer... And it's not the DNS resolution; it hangs while resolving the actual IPNS entry.

$ time ipfs resolve /ipns/QmQqR8R9nfFkWYH9P7xNPtAry8tT63miNyZwt121uXsmSU
/ipfs/QmQunuPzcLp2FiKwMDucJi957SrB8BygKA4C4J4h7VG4M9

real	1m0.078s
user	0m0.060s
sys	0m0.008s

@Stebalien (Member)

We're working on fixing some low-hanging fruit in the DHT that should alleviate this: libp2p/go-libp2p-kad-dht#88. You can expect this to appear in a release in a month or so (0.4.12 or 0.4.13).

We're also working on bypassing the DHT for recently accessed IPNS addresses by using pubsub (#4047). However, that will likely remain behind an experimental flag for a while, as our current pubsub implementation is very naive.
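For reference, that pubsub experiment is enabled with a daemon flag (mentioned again later in this thread); a sketch assuming a version where the experiment exists:

    # start the daemon with IPNS-over-pubsub enabled (experimental)
    ipfs daemon --enable-namesys-pubsub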

@linas commented Oct 21, 2019

Hi @dboreham. OK, so I ran tcpdump on eth0 and lo twice: once while the ipfs daemon was up but idle, and a second time while it was in use.

So, first, the idle system: lots of traffic to/from ports 4001 and 10001. It consists entirely of SYN, RST and ACK packets, from localhost to itself, IPv4 and IPv6, to/from ports 4001 and 10001; maybe a dozen every second, all the time. No data is transferred in these packets; they're empty. The timing is stochastic, the intervals irregular. It would appear that something is opening sockets to the ipfs daemon and then resetting and closing them immediately, over and over. The only something should be the ipfs daemon itself, so this is ... bizarre ... (In 5 minutes, I managed to capture exactly one packet between my port 5001 and an outside-world host. So my ipfs daemon is interacting with the outside world, just not very much, when sitting idle.)

Next, I see dozens of ICMP port-unreachables emanating from port 4001 to 169.254.x.x addresses, which tells me that local addresses are incorrectly leaking out into the IPFS protocols. (I don't use 169.254; I use 10.x.x.x for the internal LAN, so these addrs are not mine.) There were over a dozen distinct 169.254 addrs visible.

I see MDNS responses exactly 10 seconds apart, to within a millisecond. The response includes my self-key CID. The response packets always include all local host IPs, including the IPs of the various LXC containers on the host; all have 10.x.x.x entries. They're all marked [Unsolicited: True], so I'm not sure what purpose these are supposed to serve.

Next, I run my IPFS client. It generates a small handful of POST /api/v0/add and POST /api/v0/object/patch/add-link. Then it generates the publish, and then exactly nothing happens (excluding the above-described garbage traffic) ... nothing happens for exactly 90 seconds, down to the millisecond. The publish was this:

    POST /api/v0/name/publish?stream-channels=true&json=true&encoding=json&arg=QmVKsgztubmcYzC8UMVvmN7duqNMvkQXyUuEvodmWsfVJD&key=xfoobar-key&lifetime=4h&ttl=30s HTTP/1.1\r\n
    Host: localhost:5001\r\n
    User-Agent: cpp-ipfs-api\r\n
    Accept: */*\r\n
    Content-Length: 0\r\n
    \r\n
    [Full request URI: http://localhost:5001/api/v0/name/publish?stream-channels=true&json=true&encoding=json&arg=QmVKsgztubmcYzC8UMVvmN7duqNMvkQXyUuEvodmWsfVJD&key=xfoobar-key&lifetime=4h&ttl=30s]
    [HTTP request 1/1]
    [Response in frame: 1709]

and exactly 90 seconds later:

Frame 1709: 71 bytes on wire (568 bits), 71 bytes captured (568 bits)
Ethernet II, Src: 00:00:00:00:00:00, Dst: 00:00:00:00:00:00
Internet Protocol Version 4, Src: 127.0.0.1, Dst: 127.0.0.1
Transmission Control Protocol, Src Port: 5001, Dst Port: 35584, Seq: 474, Ack: 251, Len: 5

etc...
Hypertext Transfer Protocol
    HTTP/1.1 200 OK\r\n
    Access-Control-Allow-Headers: X-Stream-Output, X-Chunked-Output, X-Content-Length\r\n
    Access-Control-Expose-Headers: X-Stream-Output, X-Chunked-Output, X-Content-Length\r\n
    Content-Type: application/json\r\n
    Server: go-ipfs/0.4.22\r\n
    Trailer: X-Stream-Error\r\n
    Vary: Origin\r\n
    Date: Mon, 21 Oct 2019 17:15:17 GMT\r\n
    Transfer-Encoding: chunked\r\n
    \r\n

That is a totally boring, totally ordinary 200 OK response.

If you read the above, you may spot the ttl=30s parameter. I can confirm that the 90-second delay has nothing at all to do with the TTL setting (I tried TTLs of 5, 10, and 115; no change).

There is no traffic on port 53; nothing is making any DNS queries on the system.

So, to summarize: nothing unusual, except for a fairly large number of completely bogus SYN/RST packets that do nothing at all. Apart from this garbage, the ipfs daemon is effectively completely idle, sending nothing, receiving nothing. The 90-second timer, whatever it is, is in the ipfs daemon.

@aschmahmann (Contributor)

@linas A few thoughts on what you've run into:

> Publishes with the command-line tools always take 60s exactly.

I'm pretty sure this is just you hitting the DHT timeout. DHT publishes are taking a while, and this is a known issue.

> If I resolve locally, the resolution is instant, either with the command-line tool OR with API code.
> Unless I interleave another publish in the meanwhile, in which case the resolve now takes 60 seconds! (But not always; sometimes it's instant!)

What's happening here is that namesys has an internal cache that keeps published records for 1 minute by default. This means that local resolution would be non-instantaneous if you waited more than a minute between publishing and resolving.

If you know that the latest record is on your machine (e.g. because you published it), you can always resolve quickly by just passing --offline to ipfs name resolve (btw, --offline works for publishing as well if all you're interested in is local node operation).
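Concretely, that workaround looks like this (the names are placeholders):

    # resolve using only the local repo; instant if the record is local
    ipfs --offline name resolve /ipns/QmYourPeerID

    # publish without touching the network at all
    ipfs --offline name publish /ipfs/QmYourDataCID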

@linas commented Oct 28, 2019

Hi @aschmahmann, continuing the conversation:

> > Publishes with the command-line tools always take 60s exactly.
>
> I'm pretty sure this is just you hitting the DHT timeout. DHT publishes are taking a while, and this is a known issue.

Isn't this that issue? Or is there some other issue? Do you have the issue # for it?

Anyway, what you say is symptomatic of the general confusion here. If I both publish and resolve locally, I cannot imagine any valid reason why either operation would stall, for any reason. By "publishing locally", I mean that I contact the ipfs daemon running on localhost, with the URL http://localhost:5001/api/v0/name/publish. Now, it might take that daemon quite a while to announce the new publish to the whole world, but there's no reason that I know of (can think of) why it could not return immediately (and with a status code of "success"). Why is there a timeout needed for this operation, and how could it ever not be successful?

For resolution, similar remarks apply: if I am trying to resolve a name that I published locally, just seconds earlier, I see no reason at all why the ipfs daemon cannot instantly respond to the http://localhost:5001/api/v0/name/resolve request. The daemon knows for a fact that the publish it has is authoritative. It knows that it is authoritative because it is signed by the local private key. Sure, there might be someone else on the other side of the planet using that same private key, also performing publishes at the same time as I am, but it would be absurd to think that this might be happening, and to wait on that possibility.

> What's happening here is that namesys has an internal cache that keeps published records for 1 minute by default. This means that local resolution would be non-instantaneous if you waited more than a minute between publishing and resolving.

I think you are talking about the TTL parameter. Right now, the docs are silent about the default value of the TTL parameter. Maybe it's one minute. However, if I change the TTL parameter to, say, 5 minutes, then the timeout is still 60 seconds. So this "explanation" cannot be correct. In any case, since my local daemon is the authority on the publish, the TTL would not apply to it. The TTL is intended for everyone else, never for the authority (because, duh, the authority always knows the answer and never needs to ask anyone for it!).

> If you know that the latest record is on your machine (e.g. because you published it), you can always resolve quickly by just passing --offline to ipfs name resolve (btw, --offline works for publishing as well if all you're interested in is local node operation).

There is no "offline" parameter in the documentation. Please look at the docs: https://github.com/ipfs/interface-js-ipfs-core/blob/master/SPEC/NAME.md#nameresolve -- the only parameters are recursive and nocache.

@aschmahmann (Contributor)

@linas

> Isn't this that issue?

Yes. For example, see above: #3860 (comment). There are probably some related issues in go-libp2p-kad-dht as well.

> {For ipfs name publish} Why is there a timeout needed for this operation, and how could it ever not be successful?

What you are running into here is the difference between a synchronous and an asynchronous network operation. Two use cases:

Synchronous: I'd like to make some data accessible to you. I'd like to be confident that once I'm done running ipfs name publish --key=mykey /ipfs/QmData, I can turn off my machine, and as long as someone else is hosting /ipfs/QmData we're good to go. This could include me paying a third party to host my data, but not wanting to give them my IPNS publishing keys.

Asynchronous: I'd like to use IPNS as a way to address my content in a distributed way and will be doing lots of local work. If it takes a while for data to get published, that's fine. If we ever wanted to know whether the data was publicly accessible, we would just ask a friend if they could find it, or do a network query that explicitly ignores our local state.

The API is currently designed for the synchronous use case, which is plenty useful and valid. If you'd like support for the asynchronous use case, that's a new issue you should feel free to open (or even implement).

> For resolution, similar remarks apply: ... The daemon knows for a fact that the publish it has is authoritative.

This is not correct and does not fully encompass the current and future use cases for IPNS. While IPNS is single-writer, there are people who share IPNS keys between devices (IPNS keys are not restricted to the peer ID keys; they can be arbitrary asymmetric key pairs). As long as they only edit from one device at a time, they can feel reasonably confident using IPNS to sync, say, a folder between multiple devices. Again, there is a flag that only looks at your local repo which you can use to achieve your goals.
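As a sketch of that shared-key setup (commands per modern kubo; the key name is illustrative):

    # on machine A: create a named key and export it
    ipfs key gen mysite
    ipfs key export mysite -o mysite.key

    # on machine B: import the same key; either machine can now publish the name
    ipfs key import mysite mysite.key
    ipfs name publish --key=mysite /ipfs/QmYourDataCID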

> I think you are talking about the TTL parameter ... However, if I change the TTL parameter to, say, 5 minutes, then the timeout is still 60 seconds ... The TTL is intended for everyone else, never for the authority

I've already addressed why the TTL applies to everyone (unless you want to use --offline). There are options to modify the DHT timeout if you'd like, or to stream the results. Again, if you are only concerned about your machine, there is an option available for you.

> There is no "offline" parameter in the documentation.

It's not in the link you mentioned to js-ipfs-core (this is the go-ipfs repo), which might be a good point. @vasco-santos any idea where the documentation for the flag is?

In go-ipfs the documentation is there, but perhaps it's a bit confusing (if you have suggestions, I'd encourage opening an issue or PR to fix it). The CLI docs https://docs.ipfs.io/reference/api/cli/#ipfs show that you can pass --offline to any ipfs command.

@linas commented Oct 29, 2019

Ah! Now we are getting somewhere! I will try a multi-part reply, as this is getting long. First: sync vs. async. I claim that sync "never makes sense". Either you are doing a publish to http://dotcom.com:5001/api/v0/name/publish with a reasonable expectation of dotcom.com staying up 24x7, or you are doing it wrong. If you (some app on your phone) do an ipns publish and then immediately shut down your laptop/phone, then either the OS will kill -9 the ipfs daemon due to lack of response (and so the publish never leaves the device), or you go "wtf, why does my phone not turn off, I gotta buy a new phone" and yank the battery (and the publish never leaves the device). If, by happy circumstance, the phone/laptop stays on long enough, then you never needed the sync publish in the first place. To summarize: sync publish never makes sense.

What other scenarios are there?

@linas commented Oct 29, 2019

Part 2: current and future use cases for IPNS. My use case is this: I want to share keys with dozens or hundreds of others (who are doing compute of some kind and publishing results on IPFS). With the current IPNS design, I would need to share millions, up to 100 million, keys with them. Why such an obscenely large number? Because, in my current infrastructure, backed by Postgres, I have an extremely sparse dataset, of which only a few million or tens of millions of entries are non-zero (they follow a Zipfian distribution in size and connectivity). Each one of my entries is immutable, and thus has a unique content hash, but I need to hang time-varying data off of it (probabilities, counts, sensor readings, etc.), so I need dozens or hundreds or more processes cooperating to publish the latest readings hanging off of the immutable sign-posts that have been set up. This currently works fine on PostgreSQL and I can hit several thousand publishes/second (which is slower than I'd like, but whatever). I can point you at the unit tests.

With IPNS/IPFS... ugh. Clearly, having to share millions of keys with hundreds of peers seems... the opposite of "decentralized". In some ideal world, an enhanced IPNS would do this:

    (PKI public-key, hash) ==> resolved CID

instead of what it currently does:

    PKI public-key ==> resolved CID

That way, I could share only one key with peers, and whenever those peers needed to look up the CID associated with some "well-known hash", they could just do that.

@aschmahmann (Contributor)

> sync publish never makes sense ... What other scenarios are there?

There are plenty of possible scenarios, since what you are doing is giving the user an extra piece of information (that the publish completed) that they wouldn't have otherwise. Simple example: if I want to write code in JS but benefit from the features or performance of go-ipfs, I might have my application spin up a go-ipfs daemon and talk to it over the HTTP API. In this case my application would want to know when it is "safe to close".

@linas Your IPNS use case of needing millions of keys seems highly suspicious to me and is not relevant to this issue. If you could post it on https://discuss.ipfs.io/ that would be great; the conversation could be continued there.

@linas commented Oct 29, 2019

> There are plenty of possible scenarios

!? Give one real example. The history of computer science has been the elimination of sync writes, from the 1970s onward. The invention of interrupts to avoid sync writes of machine status. The invention of caches to avoid sync writes to DRAM. The invention of DMA to avoid sync writes by I/O. The invention of register write-back queues. And that's just hardware. The invention of mutex locks. The invention of software message queues. The invention of publish/subscribe. All of these were driven by the need to eliminate the stalls associated with sync writes. All of these became very popular precisely because they avoid sync execution.

I'm kind of frustrated: one didn't have to make sync the default. It could have been async by default. In my code, I have to launch and detach a thread for each IPNS publish, because I can't afford to wait for a 60/90-second timer to pop in the IPFS code. This bug itself has been open 2.5 years and has accumulated a cruft of me-toos. Discussion on all the forums has basically concluded that "IPNS is broken, let's hunker down and wait until it's fixed". And you are trying to tell me that, no, actually, there is a simple fix, nearly trivial: change the default (an undocumented API flag), and everything will work for everybody? Holy cow! Change the default!

I mean, I know what I wrote here sounds hostile, but I don't know how else to put it. You're trying to tell me that "it's a feature, not a bug", but you really have to consider that pretty much the rest of the world thinks it's a bug...
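For anyone who just wants non-blocking behavior today, one client-side workaround is to fire the publish at the HTTP API and not wait for the reply; a sketch using the endpoint captured earlier in this thread (the key name and CID are placeholders, and note the caveat raised further down: if the daemon abandons work when the client disconnects, this won't help):

    # fire-and-forget: backgrounded request, output discarded
    curl -s -X POST \
      "http://localhost:5001/api/v0/name/publish?arg=QmYourDataCID&key=mykey" \
      > /dev/null 2>&1 &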

@vasco-santos (Member)

> It's not in the link you mentioned to js-ipfs-core (this is the go-ipfs repo), which might be a good point. @vasco-santos any idea where the documentation for the flag is?

We were missing the documentation for that option; I just created a PR to add it: ipfs/js-ipfs#2569. Thanks for the heads-up!

> In go-ipfs the documentation is there, but perhaps it's a bit confusing (if you have suggestions, I'd encourage opening an issue or PR to fix it). The CLI docs https://docs.ipfs.io/reference/api/cli/#ipfs show that you can pass --offline to any ipfs command.

Regarding the JS CLI, it should also work with --offline.

@aschmahmann (Contributor) commented Oct 29, 2019

> I know what I wrote here sounds hostile

Yes, it does. I think if you actually read through this issue you will see that the problem you're complaining about is not the problem this issue is about. Additionally, you have already been told how to get around this problem. As you can see, someone has already put up a PR for a documentation improvement (thanks @vasco-santos). If you have further complaints unrelated to this issue, I'd recommend opening a new issue; this one is already pretty long.

@aschmahmann (Contributor)

> But still: why shouldn't ipfs name publish be instantaneous for the client? It should be up to the daemon to make a background task of the actual publishing work, not make the client wait for it.

This change doesn't seem unreasonable (i.e. having go-ipfs spin up a background thread instead of asking the client to do so, which is also a viable option). However, it's not necessarily the right move, as it ends up being a UX change that leads to people issuing complaints like:

> I know you said IPNS name resolution is fast, but it takes so long for me. I ran ipfs name publish QmHash and then went to the gateway to fetch ipfs.io/ipns/bafykey and it took a long time.

We end up with this confusion anyway with IPFS (since if someone is trying to find data published to the DHT for the first time, they have to wait for the publish to complete), so perhaps it's worth doing, but it definitely has tradeoffs.

@ec1oud commented Dec 13, 2020

I guess I deleted my comment about the same time you were writing a response, because I was thinking the --offline option might be the intended solution: at least it works on the command line. But errm, no:

> When offline, save the IPNS record to the local datastore without broadcasting to the network, instead of simply failing.

I want to really publish; I just don't want to wait for it. So I wrote ferristseng/rust-ipfs-api#62 ... and then I wondered if the answer to that is to just post the HTTP request to the daemon and hang up, not waiting for a response if you don't want one? But if the ipfs daemon gives up on publishing as soon as the client hangs up, that won't work. And if the ipfs daemon is running as a system service (which IMO it always should be, on as many systems as possible), it had really better be able to handle many simultaneous requests. So IPNS publishing should be pooled: one thread should probably be able to publish many names simultaneously, and if not, then spin up a few more. (I was thinking of using IPNS a lot: perhaps every uniquely-addressable largish category of data that I'm storing should have its own name, because there's no other mechanism to hold a handle to mutable data, is there? CIDs change with every data mutation, no matter how deep you bury the mutation in your data structure. E.g. I'm trying to write a database, so I think each database instance needs its own IPNS name, and each insert or update of the database will result in the need to update the IPNS record. Is that not the right way to use it?)

If we make a comparison to DNS publishing: you probably use a web UI at your hosting provider, right? If you want to change the IP address/dnslink/MX record or whatever your domain points to, everybody knows it can take hours for DNS changes to propagate worldwide, but that does not mean you expect the web UI to keep showing a spinning animation the whole time and refuse to do anything else.

@ec1oud commented Dec 15, 2020

The time that name_publish() takes is also unpredictable. With ipfs 0.7.0 (with the --enable-namesys-pubsub option) I'm seeing times from around 30 seconds to a couple of minutes, when it succeeds. I'm trying to do it once per minute, in a cron job, so if it takes longer than that I end up with overlapping processes. When it hangs, the process doing the publish hangs indefinitely. (I thought it was perhaps racy too, but that turned out to be my fault.) The ipfs daemon process needs to deal with all possible problems and return immediately from this API call. But because it's broken, I need to find a way to put a timeout into my application. This really shouldn't be the application's problem.
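Until the daemon behaves that way, the overlapping-cron-jobs problem can at least be contained on the client side; a sketch using standard flock/timeout tools (the lock path and key name are made up):

    # skip this run if the previous publish is still running; cap it at 2 minutes
    flock -n /tmp/ipns-publish.lock \
      timeout 120 ipfs name publish --key=mysite /ipfs/QmYourDataCID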

Perhaps this is one reason orbitdb uses pubsub directly, but not IPNS? I don't quite understand how they got around the naming and discovery problem yet. (Yes, I read the field manual: https://github.com/orbitdb/field-manual/blob/master/02_Thinking_Peer_to_Peer/01_P2P_vs_Client-Server.md has a clue: they believe in using one pubsub channel for queries and another for responses, so that queries can be distributed.) But it seems to me that IPNS ought to be the ideal way to name a database instance, at least in a one-to-many publishing case like the one I'm building. It looks like OrbitDB is designed for collaboration, a true distributed system where each peer contributes something; the one I'm trying to build is simpler, so I don't need the CRDT overhead (I especially don't want to sign each entry separately), but I do want to publish a structured feed, not just individual byte arrays. When there is one authoritative source for a dataset, IPNS makes sense. When there are multiple collaborating peers providing different portions of the data, it could still make sense for each peer to name his own subset of the data with IPNS.

I agree with linas on his points. (Well, maybe it's unreasonable to expect IPNS to scale to millions of records at this time, but eventually, why not?)

@aschmahmann, as to your point about needing a sync API in case you turn your phone off or shut down the ipfs daemon on your end: I don't really agree that it's a good use case for ipfs. A long-running daemon has to be running somewhere. Either you have an arrangement for pinning elsewhere and a way to find out when the pinning is done, or else you keep your own daemon running and available on the network all the time. I continue to think ipfs should be a system daemon, like a web server, not bundled many times into every application, because what if you multitask? You don't really want multiple application-specific daemons running, do you? And I know that when I publish a video to d.tube, for example, I'm not paying them to store it: I'm providing the storage myself. Therefore I keep my ipfs daemon running 24/7 on my home computer(s). With that architecture in mind, I'm writing my database against the HTTP API, as a short-lived process that talks to my local daemon, asks it to do things on its behalf, and then exits. The ipfs daemon has the responsibility to maintain the integrity and availability of the data. It should never shut down, and it should keep juggling all the tasks at once; that's its job.

@nischitpra commented Sep 21, 2023

This is still a problem 6 years into this thread. Publishing mostly takes upwards of 5 minutes. I was hoping there would be a solution in the comments from @ec1oud. I too want to publish the data as a background job instead of making the client wait.

I've tried to find whether a solution exists for this and can't seem to find one. There are a lot of different posts raising this issue, though:

https://discuss.ipfs.tech/t/why-ipns-is-so-slow/2161
https://discuss.ipfs.tech/t/publishing-to-ipns-takes-quite-a-while-normal/8293
https://discuss.ipfs.tech/t/any-way-to-speed-up-ipfs-name-publish/9443/8
#6236

559::
publishIpfsCid::bafkreibbwimjex3a7xtqra2scsvvhji3kq3rtyoh2hxyprhj5yimue2ukm::runtime::398.868s

560::
publishIpfsCid::bafkreig6cb5mu6he53ariwvm4mxe76dor6s6dehd43go4eklnbbzdm6tu4::runtime::238.07s

561::
publishIpfsCid::bafkreibwhhg5tifyrbk7xp4j6w3xm4mq6ktgugxvjh5tmtpqmhntb7idei::runtime::322.029s

562::
publishIpfsCid::bafkreif2dmrejtyicdahm3ygaogpduc4wohczutoemxmobp6de242wd6cy::runtime::230.73s

563::
publishIpfsCid::bafkreiddkvmlor5wx7gy7mindoc3dnckdpq5imvogp7wn6f5t3x5st7i7y::runtime::356.611s

564::
publishIpfsCid::bafkreibxtlfkkdvql3zpksmoacfmpmxkz6vfk6sekurwmfm5xmb5kqof44::runtime::418.249s

565::
publishIpfsCid::bafkreicz6weexgpajot4txhk4zhazsxd4lyoi5c5d47set44fpd24h2m24::runtime::426.391s

566::
publishIpfsCid::bafkreifjzfzcmvi4az7nrtegonaudf6dxywkuswfxjkwr2lqqkktgr5umm::runtime::298.863s

I've also updated the LowWater/HighWater connection-manager settings to low values and set up peering with the providers from
https://docs.ipfs.tech/how-to/peering-with-content-providers/#content-provider-list

Enabling Routing.AcceleratedDHTClient does help quite a bit:

4::
publishIpfsCid::bafkreigxgo7v32nq36ylzhs3yf4xzczgfajwh2e2xrodpgcx6ehh2dipve::runtime::11.298s

5::
publishIpfsCid::bafkreienln4yj45yiua43zhpo3fhocv2xav2ugad4uq6tjknqg3zhkqd7i::runtime::11.229s

6::
publishIpfsCid::bafkreihd5emslesdskwdsu6k5epujkj7x6nw42ucbzwsgfbvqnuca52oti::runtime::16.692s

7::
publishIpfsCid::bafkreiddvub2cityfiw3vcqnqnuoibry4kdcbbtantwkpwffsy3i7fpbcm::runtime::8.956s
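For reference, that setting is a kubo config option; a minimal way to turn it on (restart the daemon afterwards):

    ipfs config --json Routing.AcceleratedDHTClient true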

@DougAnderson444

Are you seeing the same lag with the pubsub method?

@nischitpra

Hey @DougAnderson444,
I've not tried pubsub because the docs say it is deprecated: https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-pubsub-pub.
Also, I only have one node running at the moment and would like other public nodes to subscribe to my name publishes. How would other nodes know what topic to subscribe to? From what I read, I assumed this was meant for private networks or nodes that have decided to partner together?

@DougAnderson444

> I've not tried pubsub because the docs say it is deprecated ... How would other nodes know what topic to subscribe to?

https://docs.ipfs.tech/reference/kubo/cli/#ipfs-name-pubsub

@nischitpra commented Sep 21, 2023

Hey @DougAnderson444,

> https://docs.ipfs.tech/reference/kubo/cli/#ipfs-name-pubsub

I've seen this, but it doesn't really tell you how to enable it.

I did this https://github.com/ipfs/kubo/blob/master/docs/config.md#ipnsusepubsub and then ran ipfs name pubsub state, which shows enabled.

I'm assuming I don't have to do https://github.com/ipfs/kubo/blob/master/docs/config.md#pubsub since it's deprecated.

I disabled the AcceleratedDHTClient, which I had enabled earlier.

Timing my publish doesn't seem to change:

> time ipfs name publish -key=test_key bafkreie36xfirtwos44jyhbvm536lthyfa4msxih3yfryvrhppihwh4oom

Published to k51qzi5uqu5dkgguuzwrp0oqxwhrhysh12cz6jsgoxcaemz0a8jn9wmrn399if: /ipfs/bafkreie36xfirtwos44jyhbvm536lthyfa4msxih3yfryvrhppihwh4oom

real    1m49.594s
user    0m0.029s
sys     0m0.020s

Same thing from the RPC API:

43::
publishIpfsCid::bafkreigktohbexykjcf2lsbo7h475wm7rtip5ar32ttsg3mwa4ebov55ki::runtime::47.935s

44::
publishIpfsCid::bafkreialemnslvkh62ybe5yh2bbgy4loo7k4opserbulaehnlyltmjns5a::runtime::402.891s

45::
publishIpfsCid::bafkreibootth6djmsw2uuhqedkniffh5sowulnyobbcucvzulx5rus2vfi::runtime::418.79s

46::
publishIpfsCid::bafkreicgyor6mjnxsznzjdt2b3myjx25c4wrelzj5qcor7fdf6hiwriedy::runtime::417.335s

47::
publishIpfsCid::bafkreihremhfczin5z3dupz4fm2wdydazh6f26argfbwawhv6b64hthod4::runtime::411.85s

48::
publishIpfsCid::bafkreic4y36ssyezq6427iphyyc7cdsrg7yq5px7hukeojserccvspkbwm::runtime::416.853s

49::
publishIpfsCid::bafkreihremhfczin5z3dupz4fm2wdydazh6f26argfbwawhv6b64hthod4::runtime::441.436s

50::
publishIpfsCid::bafkreidsxydpmnqaw253wsphdet2miprznq637hocwcoox6hsdwhdwir5u::runtime::420.124s

51::
publishIpfsCid::bafkreie43ylchnzpwq24edqvpzdesdy7sr6ednou5wfgj237jnppwzqmey::runtime::419.419s

52::
publishIpfsCid::bafkreibx4dkvx4jibgbgcsiloawwtrf7crtvtp3zs57uj2widxtjumryk4::runtime::421.245s

53::
publishIpfsCid::bafkreieciiwswmcdhqlgq26kdjaz5eb6jq7jheon66e2ofrcd3cpzd2wee::runtime::143.952s

54::
publishIpfsCid::bafkreibq25seisag3ysanktnpt4bvlqlthk5zorpwybxm73wcl4olmw6eq::runtime::119.142s

55::
publishIpfsCid::bafkreidhjwvpe3xrs3t5uidru3ba2usnctbs2nw7al3fw3yisiyzkjgyym::runtime::150.344s

56::
publishIpfsCid::bafkreihjz7s3l5mxfnkt2pzujiixpw6b2w7ftsr4fdrvykm4g3cqkf6m7u::runtime::122.011s

ipfs name pubsub doesn't seem to have a way to check my pubsub subscribers, but using the deprecated pubsub peers command shows I have about 66 peers.
Also, I'm not sure whether the name publish happened via pubsub or not. Cloudflare did resolve my IPNS name, so the name is published, but I'm not sure whether that was via pubsub or just the normal publish as before, since the runtime to actually publish is still pretty high.

@master255

Maybe it would be better to use PutValue/GetValue? I checked; it works fast.

@nischitpra commented Sep 29, 2023

Could you share the doc link for that? I can't find it here: https://docs.ipfs.tech/reference/kubo/rpc/

@master255

I'm talking about the DHT PutValue and GetValue. Does Kubo have a direct API to the DHT?

@nischitpra

It seems to be deprecated: https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-dht-put
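For what it's worth, kubo's replacement for the deprecated dht commands is the routing family; a hedged sketch (the peer ID is a placeholder, and put only accepts valid signed IPNS records):

    # read an IPNS record straight from the routing system
    ipfs routing get /ipns/QmYourPeerID > record.bin

    # write a signed record back
    ipfs routing put /ipns/QmYourPeerID record.bin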

@master255

And what's better than that?
