Performance improvement is life-long work. First of all, you need to know how the system performs right now, and the way to measure that usually depends on the scale.
I once developed a distributed performance test system in Python; it took quite a long time to evolve and ended up complicated. For this brand-new project I wanted to try something different: before introducing any full-blown performance test framework, I would like to build a simple and handy application.
There we Go.
Keep in mind: simple and handy.
Then it shall be able to:
- Send requests to the API server
- Measure the time cost
- Generate some reports
- Support cookies
- Be configurable
The Evaluator
type Evaluator struct {
    cli    *http.Client
    report []Item
    cookie *Cookie
}
Item keeps the request results:
type Item struct {
    Url       *url.URL      `json:"url,omitempty"`
    Elapsed   time.Duration `json:"elapsed,omitempty"`
    CreatedAt time.Time     `json:"created_at,omitempty"`
}
Cookie keeps session-related data:
type Cookie struct {
    data map[string]interface{}
}
Get some data
func (self *Evaluator) Elapsed(req *http.Request) (*http.Response, error) {
    before := time.Now()
    rsp, err := self.cli.Do(req)
    self.report = append(self.report, Item{
        Url:       req.URL,
        Elapsed:   time.Since(before),
        CreatedAt: before,
    })
    return rsp, err
}
Time to print some results.
func (self *Evaluator) GenReport() error {
    fmt.Println()
    fmt.Println("-- report --")
    for _, v := range self.report {
        fmt.Println(v.String())
    }
    return nil
}
To get a pretty print, create a String function for Item as well.
func (self Item) String() string {
    return fmt.Sprintf("%v [%s:%v] %s", self.CreatedAt.Unix(), self.Url.Path, self.Url.RawQuery, self.Elapsed.String())
}
Maybe also generate CSV for further processing. I was using pretty printing in Go; it is good to read but difficult to process later.
func (self *Evaluator) GenReportCSV(path string) error {
    // ...
    w := csv.NewWriter(fd)
    w.Write([]string{"Timestamp", "Api", "Query", "Elapsed"})
    for _, v := range self.report {
        if err = w.Write([]string{
            fmt.Sprintf("%d", v.CreatedAt.Unix()),
            v.Url.Path,
            v.Url.RawQuery,
            fmt.Sprintf("%.4f", v.Elapsed.Seconds()),
        }); err != nil {
            return err
        }
    }
    w.Flush()
    return nil
}
Create a dummy client to do something crazy. wg is a WaitGroup variable used to wait for all the crazy clients to finish their operations.
func dummy_terminal() {
    app := apps.NewApp()
    defer app.Close()
    // ... do something crazy to API ...
    if err := app.GenReport(); err != nil {
        fmt.Println("generate report failed: ", err)
    }
    out := fmt.Sprintf("%s/performance-%d.csv", *output_base, time.Now().UnixNano())
    if err := app.GenReportCSV(out); err != nil {
        fmt.Println("generate report csv failed: ", err)
    }
    wg.Done()
}
Configure on the command line with flag:
var (
    terminals   = flag.Int("terminals", 30, "total concurrent terminals")
    output_base = flag.String("output_base", "/tmp", "backend output_base")
)
Found a great logging module in Fabric :D
go get github.com/hyperledger/fabric/common/flogging
The CSV output looks like
Timestamp,Api,Query,Elapsed
1562142836,/api/home,,2.0599
1562142842,/api/list,,0.0311
1562142842,/api/somepage,id=99,0.0323
1562142842,/api/somepage,id=66,1.6131
1562142845,/api/somepage,id=33,0.0432
The console output is pretty much the same, except the time is formatted.