Advanced search

Media (0)

Keyword: - Tags - / protocols

No media matching your criteria is available on the site.

Other articles (93)

  • Customizing by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (15817)

  • How do I save `RTSP` streams as `*.mp4` files in Golang? [closed]

    14 September 2021, by Clancy Zeng

    I need to use Golang to save the data from an RTSP stream to a *.mp4 video file for a specified length of time.

    I wanted to install FFmpeg and then call a command like this from Golang to meet my needs:

    ffmpeg -i rtsp://192.168.108.132:8554/profile0 -r 16 -t 60 /tmp/test.mp4

    But installing FFmpeg takes up hundreds of megabytes of disk space, so this idea was rejected by management.

    Is there a pure Golang library that can address this need, i.e. a Go-only library that parses RTSP streams and saves them as *.mp4 video files?
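
    For reference, a minimal sketch of the exec-based approach mentioned above, using only Go's standard library os/exec. It still relies on an ffmpeg binary being available on PATH, which is exactly the constraint at issue here; the stream URL and output path are the ones from the command above.

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Record the RTSP stream for 60 seconds into /tmp/test.mp4 by running
        // a system-installed ffmpeg; this assumes ffmpeg is on PATH.
        cmd := exec.Command("ffmpeg",
            "-i", "rtsp://192.168.108.132:8554/profile0",
            "-r", "16",
            "-t", "60",
            "/tmp/test.mp4",
        )
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("ffmpeg failed: %v\n%s", err, out)
        }
    }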

  • Trying to save frames as colored images using FFmpeg in C++

    2 September 2021, by Tolga

    I am new to FFmpeg and I am trying to save video frames as colored images. I have managed to save them as grayscale using Netpbm; however, I need to save the frames in color. I have tried implementing the code in this link.

    However, I get an error:

    Exception thrown at 0x00E1FC4F (swscale-5.dll) in VideoDecoding2.exe:
    0xC0000005: Access violation writing location 0xCCCCCCCC.

    Is there any way to improve this code, or is there another way to save the frames in color?

    Here is my code below.

    • src_pix_fmt is AV_PIX_FMT_YUV420P.
    • dst_pix_fmt is AV_PIX_FMT_RGB24.

src_pix_fmt = avcc->pix_fmt;

src_width = avcc->width;
src_height = avcc->height;

dst_width = src_width;
dst_height = src_height;

numBytes = av_image_get_buffer_size(dst_pix_fmt, dst_width, dst_height, 0);

buffer = (uint8_t*)av_malloc(numBytes);

if ((ret = av_image_alloc(src_data, src_linesize, src_width, src_height, src_pix_fmt, 16)) < 0)
{
    printf("Couldn't allocate source image.\n");
    return 0;
}

av_image_fill_arrays(frameRGB->data, frameRGB->linesize, buffer, dst_pix_fmt, dst_width, dst_height, 0);

while (av_read_frame(avfc, packet) >= 0)
{
    ret = avcodec_send_packet(avcc, packet);
    if (ret < 0)
    {
        printf("Packets could not supplied to decoder.\n");
        return -1;
    }

    ret = avcodec_receive_frame(avcc, frame);
    printf("%d", ret);

    if (packet->stream_index == videoStream)
    {
        sws_ctx = sws_getContext(src_width, src_height, src_pix_fmt,
            dst_width, dst_height, dst_pix_fmt,
            SWS_BILINEAR, NULL, NULL, NULL);

        if (!sws_ctx)
        {
            printf("Cannot create scale context for conversion\n"
                "fmt:%s s:%dx%d --> fmt:%s s:%dx%d\n",
                av_get_pix_fmt_name(src_pix_fmt), src_width, src_height,
                av_get_pix_fmt_name(dst_pix_fmt), dst_width, dst_height);
            return 0;
        }

        sws_scale(sws_ctx, (const uint8_t* const*)frame->data, frame->linesize, 0, frame->height, dst_data, dst_linesize);

        FILE* f;
        char szFilename[32];
        int y;

        snprintf(szFilename, sizeof(szFilename), "frame%d.ppm", avcc->frame_number);
        fopen_s(&f, szFilename, "wb");
        
        if (f == NULL)
        {
            printf("Couldn't open file.\n");
            return 0;
        }
        
        fprintf(f, "P6\n%d %d\n255\n", dst_width, dst_height);

        for (y = 0; y < dst_height; y++)
            fwrite(frameRGB->data[0] + y * frameRGB->linesize[0], 1, dst_width * 3, f);
        
        fclose(f);
    }
}
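
    The faulting address 0xCCCCCCCC is MSVC's fill pattern for uninitialized stack memory, which suggests that sws_scale is writing into the dst_data / dst_linesize arrays, which are never allocated in the code above. A minimal sketch of the conversion call, assuming the destination should be the buffers already attached to frameRGB by av_image_fill_arrays:

    // Hedged sketch: write the converted RGB frame into frameRGB's buffers
    // instead of the uninitialized dst_data / dst_linesize arrays.
    sws_scale(sws_ctx,
              (const uint8_t* const*)frame->data, frame->linesize,
              0, frame->height,
              frameRGB->data, frameRGB->linesize);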

  • FFmpeg av_read_frame returns packets from audio stream

    1 August 2018, by yultan

    I am currently trying to learn the FFmpeg API, following this tutorial. However, I am already running into issues with the first lesson on video decoding. My code is basically the same as the one from the tutorial, except that I am using C++. My issue is that the video stream index does not match the stream_index of the packet returned by av_read_frame.

    The video stream is found by looping over the available streams until a video stream is encountered.

    for(int i = 0; i < pFormatCtx->nb_streams; i++) { // nb_streams == 2

       if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
           videoStream = i;
           break; // videoStream == 0
       }
    }
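
    For reference, a minimal alternative sketch that lets FFmpeg pick the stream itself via av_find_best_stream; in FFmpeg 4.x, streams[i]->codecpar->codec_type is also preferred over the deprecated ->codec field used above.

    // Hedged sketch: ask FFmpeg for the "best" video stream index directly.
    videoStream = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (videoStream < 0) {
        // no video stream was found in the input
    }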

    Then, when retrieving the frame data, it seems to be grabbing the audio stream.

    while(av_read_frame(pFormatCtx, &packet) >= 0) { // read returns 0

       // Is this a packet from the video stream?
       if(packet.stream_index == videoStream) {
           //packet.stream_index == 1, which correspond to the audio stream
       }
    }

    I have not found examples online where this test actually fails. Have I missed some way to specify the stream_index that is not in the tutorial? Maybe the tutorial is not up to date and is doing something wrong? If so, what is the correct way to extract the frame data? In case that matters, I am using the latest FFmpeg 4.0.2 build, on 64-bit Windows, compiling with Visual Studio 2017.

    On videos with no sound, the two streams match and I am able to decode and display the frames correctly.
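
    For reference, a minimal sketch of the intended read loop: av_read_frame returns packets from every stream of the file interleaved, so packets whose stream_index points at the audio stream are expected, and the if test is precisely the filter that skips them. This assumes pFormatCtx and videoStream are set up as in the question; av_packet_unref replaces the older av_free_packet used by some versions of the tutorial.

    AVPacket packet;
    while (av_read_frame(pFormatCtx, &packet) >= 0) {
        if (packet.stream_index == videoStream) {
            // decode / display only the video packets here
        }
        av_packet_unref(&packet);  // release every packet, video or audio
    }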