
Memory leak creating gfapi.Volumes

chrisbecke opened this issue 3 years ago

Issue

In a long-running service I create multiple gfapi.Volume objects so that the service keeps functioning if a singleton instance disconnects. This becomes a problem because the service consumes memory over time and is eventually OOM-killed.
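
For context, the pattern in the service looks roughly like the hypothetical sketch below (reconnect, the volume name vol1 and the host glusterfs are illustrative, not the real service's values): whenever the current connection looks unhealthy, a fresh gfapi.Volume is initialised and the old one is simply dropped.

package main

import (
	"log"

	"github.com/gluster/gogfapi/gfapi"
)

var current *gfapi.Volume

// reconnect replaces the current Volume with a freshly initialised and
// mounted one. The previous Volume is dropped without being unmounted.
func reconnect() error {
	vol := &gfapi.Volume{}
	if err := vol.Init("vol1", "glusterfs"); err != nil {
		return err
	}
	if err := vol.Mount(); err != nil {
		return err
	}
	current = vol
	return nil
}

func main() {
	if err := reconnect(); err != nil {
		log.Fatalf("initial connect failed: %v", err)
	}
	// On any I/O error the service calls reconnect() again; memory usage
	// grows with every reconnection.
}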

To replicate the issue

1. The following files are created in a directory:

Dockerfile

FROM gluster/glusterfs-client AS builder

RUN yum -y update
RUN yum install -y glusterfs-api glusterfs-api-devel glusterfs-fuse gcc
RUN yum -y install valgrind
RUN curl -k https://dl.google.com/go/go1.15.3.linux-amd64.tar.gz | tar xz -C /usr/local

ENV PATH=/usr/local/go/bin:$PATH

docker-compose.yml

version: "3.8"

volumes:
  save1:
  data1:
  lvm1:

services:
  builder:
    image: chrisb/builder
    build:
      context: .
      target: builder
    volumes:
    - ./:/src
    working_dir: /src

  glusterfs:
    image: gluster/gluster-centos
    hostname: glusterfs
    privileged: true
    environment:
      CGROUP_PIDS_MAX: 0
    volumes:
      - lvm1:/run/lvm
      - save1:/var/lib/glusterd
      - data1:/data

main.go

package main

import (
	"flag"
	"fmt"
	"log"
	"strings"

	"github.com/gluster/gogfapi/gfapi"
)

var loop = flag.Int("n", 10, "iterations")

const gfs_volume = "vol1"
var gfs_hosts = strings.Split("glusterfs", ` `)

func testMount() {
	vol := &gfapi.Volume{}

	if err := vol.Init(gfs_volume, gfs_hosts...); err != nil {
		log.Printf("failed %v\n", err)
	}

	//	if err := vol.Mount(); err != nil {
	//		log.Fatalf("failed %v", err)
	//	}
	//	defer vol.Unmount()
}

func main() {
	flag.Parse()

	for i := 0; i < *loop; i++ {
		testMount()
		fmt.Printf(".")
	}
	fmt.Printf("\ndone\n")
}
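
The Mount/Unmount calls are left commented out, so the reported leak shows up with Init alone. For comparison, the loop can also be run with each iteration tearing down what it created. The variant below is a sketch added for illustration (testMountWithCleanup is not part of the original reproduction and drops into main.go above); it assumes that Unmount, which as far as I can tell wraps glfs_fini in gogfapi, is the intended way to release the handle created by Init.

// Variant of testMount with cleanup, for comparison.
func testMountWithCleanup() {
	vol := &gfapi.Volume{}

	if err := vol.Init(gfs_volume, gfs_hosts...); err != nil {
		log.Printf("init failed: %v", err)
		return
	}

	if err := vol.Mount(); err != nil {
		log.Printf("mount failed: %v", err)
		return
	}
	// Unmount should release the underlying handle created by Init.
	defer vol.Unmount()
}

If valgrind still reports linear growth with this variant, the leak would be inside libgfapi itself rather than in how the handles are released by the caller.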

2. Start the environment

In a terminal in the folder, run the following to start the glusterfs server and a build environment:

docker-compose up -d glusterfs
docker-compose exec glusterfs gluster volume create vol1 glusterfs:/data/vol1
docker-compose exec glusterfs gluster volume start vol1
docker-compose build builder
docker-compose run builder

3. Run valgrind

In the builder terminal, run the following commands to build the binary and generate reports with the valgrind memcheck tool:

go build -o test
valgrind --log-file=log50.txt --leak-check=full ./test -n 50
valgrind --log-file=log100.txt --leak-check=full ./test -n 100

Observed Behaviour

The amount of memory valgrind reports as leaked grows linearly with the number of iterations.
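
As a rough cross-check that does not depend on valgrind, the resident set size of the process can be sampled between iterations; with the reproduction above it grows with -n as well. Below is a standalone sketch (Linux-only, since it reads /proc/self/status; rssKB is a helper introduced here purely for illustration), using ioutil.ReadFile to stay compatible with the Go 1.15.3 toolchain installed in the builder image.

package main

import (
	"flag"
	"fmt"
	"io/ioutil"
	"log"
	"strconv"
	"strings"

	"github.com/gluster/gogfapi/gfapi"
)

var loop = flag.Int("n", 10, "iterations")

// rssKB returns the current VmRSS of this process in kilobytes, read from
// /proc/self/status. Intended purely as a quick sanity check.
func rssKB() int {
	data, err := ioutil.ReadFile("/proc/self/status")
	if err != nil {
		return -1
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "VmRSS:") {
			if f := strings.Fields(line); len(f) >= 2 {
				kb, _ := strconv.Atoi(f[1])
				return kb
			}
		}
	}
	return -1
}

func main() {
	flag.Parse()
	for i := 0; i < *loop; i++ {
		vol := &gfapi.Volume{}
		if err := vol.Init("vol1", "glusterfs"); err != nil {
			log.Printf("init failed: %v", err)
		}
		if i%10 == 0 {
			fmt.Printf("iteration %d: RSS %d kB\n", i, rssKB())
		}
	}
}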

chrisbecke, Dec 04 '20