
Other articles (54)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (6709)

  • An efficient way to use Windows Named Pipe for IPC

    26 July 2020, by Ehsan5

    I am using the JNA module to connect two processes that both run FFMPEG commands: I send the STDOUT of the FFMPEG command on the server side into a named pipe, and feed what comes out of that pipe to the STDIN of the other FFMPEG command on the client side.

    


    This is how I capture STDOUT and send it into the pipe on the server side:

    


InputStream out = inputProcess.getInputStream();
byte[] buffer = new byte[maxBufferSize];
while (inputProcess.isAlive()) {
    // poll how much STDOUT data FFMPEG has produced so far
    int no = out.available();
    if (no > maxBufferSize) {
        // read one buffer's worth and push it into the named pipe
        int n = out.read(buffer, 0, maxBufferSize);
        IntByReference lpNumberOfBytesWritten = new IntByReference(maxBufferSize);
        Kernel32.INSTANCE.WriteFile(pipe, buffer, buffer.length, lpNumberOfBytesWritten, null);
    }
}


    


    And this is how I read from the pipe and feed it to the STDIN on the client side:

    


OutputStream in = outputProcess.getOutputStream();
while (pipeOpenValue >= 1 && outputProcess.isAlive() && ServerIdState) {
    // read from the pipe
    resp = Kernel32.INSTANCE.ReadFile(handle, readBuffer, readBuffer.length, lpNumberOfBytesRead, null);
    // write to the STDIN of outputProcess
    if (outputProcess != null) {
        in.write(readBuffer);
        in.flush();
    }
    // check the pipe status
    Kernel32.INSTANCE.GetNamedPipeHandleState(handle, null, PipeOpenStatus, null, null, null, 2048);
    pipeOpenValue = PipeOpenStatus.getValue();
    WinDef.ULONGByReference ServerId = new WinDef.ULONGByReference();
    ServerIdState = Kernel32.INSTANCE.GetNamedPipeServerProcessId(handle, ServerId);
}


    


    But I am facing two problems:

    1. High CPU usage due to the two spinning loops on the server and the client (found by profiling with VisualVM); a blocking-read alternative is sketched below.
    2. The operation is slower than simply connecting the two FFMPEG commands with a regular | in the command prompt. The speed depends on the buffer size, but a large buffer blocks the operation and a small buffer reduces the speed even further.
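    The high CPU usage comes from both loops spinning on available() and GetNamedPipeHandleState() instead of letting the reads block. A minimal sketch of the server loop with a plain blocking read, reusing the variables from the snippet above (a sketch only, not tested against this setup):

InputStream out = inputProcess.getInputStream();
byte[] buffer = new byte[maxBufferSize];
int n;
// read() blocks until FFMPEG produces data, so the loop no longer spins
while ((n = out.read(buffer)) != -1) {
    IntByReference written = new IntByReference(0);
    // write only the n bytes actually read, not the whole buffer
    Kernel32.INSTANCE.WriteFile(pipe, buffer, n, written, null);
}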


    Questions:

    1. Is there a way not to send and receive in chunks of bytes, and instead just stream STDOUT into the named pipe and capture it on the client side, eliminating the two loops? (See the sketch below.)
    2. If I can't use named pipes, is there any other way to connect two FFMPEG processes that run in different Java modules but on the same machine?
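    On the first question: on Windows the client end of a named pipe can be opened by its file path, so the client side needs no JNA at all. A minimal sketch, assuming the server has already created \\.\pipe\ffpipe (the pipe name here is made up) and Java 9+ is available for transferTo:

import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;

// open the client end of the named pipe like an ordinary file
try (InputStream pipeIn = new FileInputStream("\\\\.\\pipe\\ffpipe")) {
    OutputStream ffmpegStdin = outputProcess.getOutputStream();
    // transferTo copies with blocking reads until EOF, eliminating the manual loop
    pipeIn.transferTo(ffmpegStdin);
    ffmpegStdin.flush();
}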


    Thanks

    


  • ffmpeg streaming of audio and video using rtmp

    30 July 2020, by weicheng.yu


    


    I want to stream some videos (a dynamic playlist managed by a Python script) to an RTMP server. Currently I'm doing something quite simple: streaming my videos one by one to the RTMP server with FFMPEG, but this causes a connection break every time a video ends, and the stream only comes back up when the next video begins.

    


    I would like to stream those videos continuously, without any connection breaks, so that the stream can be viewed correctly.
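    For reference, one common way to keep a single RTMP connection across a whole playlist is ffmpeg's concat demuxer; a sketch, where playlist.txt, the file names and the RTMP URL are placeholders:

# playlist.txt lists the videos, one per line, e.g.: file 'video1.mp4'
ffmpeg -re -f concat -safe 0 -i playlist.txt -c copy -f flv rtmp://example.com/live/streamkey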

    


    This is the loop I currently use to read the videos one by one and queue their packets:

    


while (CanRun)
{
    try
    {
        do
        {
            // read one undecoded packet
            error = ffmpeg.av_read_frame(pFormatContext, pPacket);
            // Console.WriteLine(pPacket->dts);
            if (error == ffmpeg.AVERROR_EOF) break;
            if (error < 0) throw new ApplicationException(GetErrorMessage(error));

            if (pPacket->stream_index == pStream->index) { }
            else if (pPacket->stream_index == aStream->index)
            {
                AVPacket* aVPacket = ffmpeg.av_packet_clone(pPacket);
                if (Aqueue.Count > 49) Aqueue.Dequeue();
                Aqueue.Enqueue(*aVPacket);

                ++AframeNumber;
                continue;
            }
            else
            {
                ffmpeg.av_packet_unref(pPacket); // release the packet reference
                continue;
            }

            // send the packet to the decoder
            error = ffmpeg.avcodec_send_packet(pCodecContext, pPacket);
            if (error < 0) throw new ApplicationException(GetErrorMessage(error));
            // receive a decoded frame from the decoder
            error = ffmpeg.avcodec_receive_frame(pCodecContext, pDecodedFrame);
        } while (error == ffmpeg.AVERROR(ffmpeg.EAGAIN) && CanRun);
        if (error == ffmpeg.AVERROR_EOF) break;
        if (error < 0) throw new ApplicationException(GetErrorMessage(error));
        if (pPacket->stream_index != pStream->index) continue;

        AVFrame* aVFrame = ffmpeg.av_frame_clone(pDecodedFrame);
        if (Vqueue.Count > 49) Vqueue.Dequeue();
        Vqueue.Enqueue(*aVFrame);
    }
    finally
    {
        ffmpeg.av_packet_unref(pPacket);      // release the packet reference
        ffmpeg.av_frame_unref(pDecodedFrame); // release the decoded-frame reference
    }

    VframeNumber++;
    FFmpeg_Manager.ShowMessage = string.Format(ProgramInfo, VframeNumber, AframeNumber, exhibitionNum, effectiveNum);
}


    


  • FFmpeg H.264 encoded video is sped up if the camera captures in low light

    10 August 2020, by Expressingx

    I'm encoding everything to H.264, using h264_qsv if it is available and libx264 otherwise. It works fine, but I noticed that if the camera records in low light, the saved video is sped up by about 2x or 3x, and I'm not sure where the problem is. Creating the input format context:

    


        private AVFormatContext* CreateFormatContext()
    {
        AVDictionary* options = null;

        ffmpeg.av_dict_set(&options, "packet-buffering", "0", 0);
        ffmpeg.av_dict_set(&options, "sync", "1", 0);
        ffmpeg.av_dict_set(&options, "rtsp_transport", "tcp", 0);
        ffmpeg.av_dict_set(&options, "reconnect", "1", 0);
        ffmpeg.av_dict_set(&options, "analyzeduration", "2000000", 0);
        ffmpeg.av_dict_set(&options, "probesize", (16384 * 16).ToString(), 0);
        ffmpeg.av_dict_set(&options, "max_delay", "0", 0);
        ffmpeg.av_dict_set(&options, "reorder_queue_size", "0", 0);
        ffmpeg.av_dict_set(&options, "skip_frame", "8", 0);
        ffmpeg.av_dict_set(&options, "skip_loop_filter", "48", 0);
        ffmpeg.av_dict_set(&options, "rtbufsize", "1000M", 0);

        AVFormatContext* pInputFmtCtx = ffmpeg.avformat_alloc_context();

        AVInputFormat* inputFormat = null;

        if (!string.IsNullOrEmpty(_format))
        {
            inputFormat = ffmpeg.av_find_input_format(_format);

            if (inputFormat == null)
            {
                //throw
            }
        }

        int ret = ffmpeg.avformat_open_input(&pInputFmtCtx, _streamUrl, inputFormat, &options);

        if (ret != 0)
        {
            //throw
        }

        return pInputFmtCtx;
    }


    


    The video decoder:

    


        private void CreateVideoDecoder()
    {
        AVStream* videoStream = InputFormatContext->streams[VideoStreamIndex];
        AVCodecParameters* videoCodecParams = videoStream->codecpar;
        AVCodec* videoDecoder = ffmpeg.avcodec_find_decoder(videoCodecParams->codec_id);

        VideoDecodeContext = ffmpeg.avcodec_alloc_context3(videoDecoder);

        if (ffmpeg.avcodec_parameters_to_context(VideoDecodeContext, videoCodecParams) < 0)
        {
            //throw
        }

        if (ffmpeg.avcodec_open2(VideoDecodeContext, videoDecoder, null) < 0)
        {
            //throw
        }
    }


    


    And the H.264 encoder:

    


    private void CreateH264Encoder(AVStream* inputStream, AVStream* outputStream)
    {
        AVRational framerate = ffmpeg.av_guess_frame_rate(_inputContext.InputFormatContext, inputStream, null);

        AVCodec* videoEncoder = ffmpeg.avcodec_find_encoder_by_name("h264_qsv");
        if (videoEncoder == null)
        {
            videoEncoder = ffmpeg.avcodec_find_encoder_by_name("libx264");
            PixelFormat = AVPixelFormat.AV_PIX_FMT_YUV420P;
        }

        if (videoEncoder == null)
        {
            //throw
        }

        VideoEncodeContext = ffmpeg.avcodec_alloc_context3(videoEncoder);

        if (VideoEncodeContext == null)
        {
            //throw
        }

        VideoEncodeContext->width = _inputContext.VideoDecodeContext->width;
        VideoEncodeContext->height = _inputContext.VideoDecodeContext->height;
        VideoEncodeContext->pix_fmt = PixelFormat;
        VideoEncodeContext->bit_rate = 2 * 1000 * 1000;
        VideoEncodeContext->rc_buffer_size = 4 * 1000 * 1000;
        VideoEncodeContext->rc_max_rate = 2 * 1000 * 1000;
        VideoEncodeContext->rc_min_rate = 3 * 1000 * 1000;
        VideoEncodeContext->framerate = framerate;
        VideoEncodeContext->max_b_frames = 0;
        VideoEncodeContext->time_base = ffmpeg.av_inv_q(framerate);
        VideoEncodeContext->flags |= ffmpeg.AV_CODEC_FLAG_GLOBAL_HEADER;

        ffmpeg.av_opt_set(VideoEncodeContext->priv_data, "preset", "slow", 0);
        ffmpeg.av_opt_set(VideoEncodeContext->priv_data, "vprofile", "baseline", 0);

        if (ffmpeg.avcodec_open2(VideoEncodeContext, videoEncoder, null) < 0)
        {
            //throw
        }

        ffmpeg.avcodec_parameters_from_context(outputStream->codecpar, VideoEncodeContext);
    }


    


    I'm using ffmpeg 4.0.1, so I'm decoding/encoding with the new send/receive API, which I'll skip sharing for now because it's nothing more than following this link: https://ffmpeg.org/doxygen/3.3/group__lavc__encdec.html