- memory statistics (in JSON; see the "memstats" object)
  - `curl localhost:5001/debug/vars > ipfs.vars`
- system information
  - `ipfs diag sys > ipfs.sysinfo`
...
...
@@ -28,7 +31,7 @@ Bundle all that up and include a copy of the ipfs binary that you are running
You can investigate it yourself if you feel intrepid:
### Analyzing the stack dump
The first thing to look for is hung goroutines -- any goroutine that's been stuck
for over a minute will note that in the trace. It looks something like:
...
...
@@ -84,6 +87,10 @@ about `go tool pprof`. My go-to method of analyzing these is to run the `web`
command, which generates an SVG dotgraph and opens it in your browser. This is
the quickest way to see where the hot spots in the code are.
### Analyzing vars and memory statistics
The output is JSON-formatted and includes badger store statistics, the command line run, and the output from Go's [runtime.ReadMemStats](https://golang.org/pkg/runtime/#ReadMemStats). The [MemStats](https://golang.org/pkg/runtime/#MemStats) object has useful information about memory allocation and garbage collection.
### Other
If you have any questions, or want us to analyze some weird go-ipfs behavior,