Advanced search

Media (1)

Word: - Tags -/pirate bay

Other articles (36)

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the instances of the mutualisation on a regular basis. Combined with a system Cron on the central site of the mutualisation, this makes it possible to generate regular visits to the different sites and to prevent the tasks of rarely visited sites from being too (...)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it handles a set of dynamic tags for use in the context of the Semantic Web.
    XMP makes it possible to record, in the form of an XML document, information relating to a file: title, author, history (...)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

On other sites (4936)

  • How to convert a Stream on the fly with FFMpegCore?

    18 October 2023, by Adrian

    For a school project, I need to stream videos that I get from torrents while they are downloading on the server.
When the video is a .mp4 file there is no problem, but I must also be able to stream .mkv files. For that, I need to convert them to .mp4 before sending them to the client, and I can't find a way to convert the Stream I get from MonoTorrents with FFMpegCore into a Stream that I can send to my client.

    


    Here is the code I wrote to simply download and stream my torrent:

    


    var cEngine = new ClientEngine();

var manager = await cEngine.AddStreamingAsync(GenerateMagnet(torrent), "downloads" /* hypothetical save directory - this argument was missing */) ?? throw new Exception("An error occurred while creating the torrent manager");

await manager.StartAsync();
await manager.WaitForMetadataAsync();

var videoFile = manager.Files.OrderByDescending(f => f.Length).FirstOrDefault();
if (videoFile == null)
    return Results.NotFound();

var stream = await manager.StreamProvider!.CreateStreamAsync(videoFile, true);
return Results.File(stream, contentType: "video/mp4", fileDownloadName: manager.Name, enableRangeProcessing: true);


    


    I saw that the most common way to convert videos is by using ffmpeg. .NET has a package called FFMpegCore that is a wrapper for ffmpeg.

    


    To my previous code, I would add the following right before the return:

    


    if (!videoFile.Path.EndsWith(".mp4"))
{
    var outputStream = new MemoryStream();
    FFMpegArguments
        .FromPipeInput(new StreamPipeSource(stream), options =>
        {
            options.ForceFormat("mp4");
        })
        .OutputToPipe(new StreamPipeSink(outputStream))
        .ProcessAsynchronously();
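    // note: ProcessAsynchronously() returns a Task that is never awaited here,
    // so the request neither waits for nor observes errors from the conversion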
    return Results.File(outputStream, contentType: "video/mp4", fileDownloadName: manager.Name, enableRangeProcessing: true);
}


    


    I unfortunately can't get a "live" Stream to send to my client.
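
    What I'm aiming for is something like the following sketch (untested; it assumes System.IO.Pipelines and fragmented MP4 output, since the regular mp4 muxer needs a seekable output and cannot finalize a file written to a pipe):

    // sketch only - needs: using System.IO.Pipelines; using FFMpegCore.Pipes;
var pipe = new Pipe();

// run the conversion in the background; awaiting it here would buffer the
// whole file before anything is sent to the client
_ = FFMpegArguments
    .FromPipeInput(new StreamPipeSource(stream))
    .OutputToPipe(new StreamPipeSink(pipe.Writer.AsStream()), options => options
        .ForceFormat("mp4")
        // fragmented MP4: the muxer cannot seek back in a pipe to write the
        // moov atom at the end, so ask for it to be written up front
        .WithCustomArgument("-movflags frag_keyframe+empty_moov"))
    .ProcessAsynchronously()
    .ContinueWith(_ => pipe.Writer.Complete());

// the converted length is unknown, so range processing is left off here
return Results.File(pipe.Reader.AsStream(), contentType: "video/mp4", fileDownloadName: manager.Name);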

    


  • How to interpret ffmpeg recording options available for a webcam (directshow)?

    5 January 2023, by Jones659

    I am trying to create a GUI for personal use that allows someone to customise the recording and conversion options of ffmpeg without directly using the command line. At the moment, I am learning about the different parameters and flags in ffmpeg.

    


    Apologies in advance if I end up asking some stupid questions; I am on a learning journey at the moment, and unfortunately not all of this info is available online in an easily understandable way.

    


    I have a USB webcam which reported having the following options available to it:

    


    [dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=640x480 fps=5 max s=640x480 fps=30
[dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=640x480 fps=5 max s=640x480 fps=30 (tv, bt470bg/bt709/unknown, topleft) chroma_location=topleft
[dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=352x288 fps=5 max s=352x288 fps=30
[dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=352x288 fps=5 max s=352x288 fps=30 (tv, bt470bg/bt709/unknown, topleft)
[dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=320x240 fps=5 max s=320x240 fps=30
[dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=320x240 fps=5 max s=320x240 fps=30 (tv, bt470bg/bt709/unknown, topleft)
[dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=176x144 fps=5 max s=176x144 fps=30
[dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=176x144 fps=5 max s=176x144 fps=30 (tv, bt470bg/bt709/unknown, topleft)
[dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=160x120 fps=5 max s=160x120 fps=30
[dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=160x120 fps=5 max s=160x120 fps=30 (tv, bt470bg/bt709/unknown, topleft)
[dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=1280x1024 fps=5 max s=1280x1024 fps=9
[dshow @ 00000000003f9340]   pixel_format=yuyv422  min s=1280x1024 fps=5 max s=1280x1024 fps=9 (tv, bt470bg/bt709/unknown, topleft)


    


    I just want to get to the bottom of how I should interpret this; apologies that I will ask multiple questions:

    


      

    1. The fact that both resolution and fps have a min and max value (for every option) seems to imply that these two parameters are supposedly uncontrollably variable, right? In practice, the fps has been variable depending on brightness, but the resolution has not been - is it safe to assume that video imaging devices (especially webcams) do not have variable resolution?

    2. Secondly, why is every option listed twice, except that half of them specify extra info, such as color_range, color_space and chroma_location? Is this just a quirk? Surely those extra parameter options should not be discarded?

    3. It's hard to know how to make sense of this, but for example: the fact that only "tv" is ever shown - does that imply that the webcam can only ever do limited color range, and that there is no point trying to get full 0-255 out of it? I read somewhere that "pc" implies a full range of 0-255, whereas "tv" implies a range of 16-235 (see the command sketch just after this list).

    4. With regards to color space, is it acceptable to record the webcam as raw (un-encoded), and then convert to a different color space later down the line? Which approach to dealing with the color space yields the least amount of lost color? My only previous experience with color spaces is in the realm of images - where, for example, it makes no sense to convert sRGB to ROMM16 RGB, because you are going to a color space which has wider coverage, and extra colors won't be created out of thin air; you'd want to go once from raw to a color space, and avoid converting between color spaces afterwards. Also, what does "unknown" mean in the color space options?
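    (Regarding question 3: the range-expansion test I have seen suggested uses the scale filter's range options - untested on my side:)

ffmpeg -i in.mkv -vf scale=in_range=tv:out_range=pc -c:v libx264 out_full.mkv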


    


    Here's the culmination of some research/testing I've done; is there anything correct, or seriously wrong, in the conclusions and assumptions I've made below?

    


    My understanding of pixel_format is as follows: when you're recording (even to raw), you specify the pixel format using something like "-pixel_format yuyv422"; this is a "packed", not "planar", format, and it is what the webcam produces. When you convert from raw to something like mkv using libx264, you can't specify a "packed" pixel format such as "yuyv422", but must instead use an appropriate planar counterpart, such as "yuv422p", which would be specified using "-pix_fmt yuv422p".
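
    Concretely, the two-step pipeline I mean would look something like this (the device name "USB Camera" is a placeholder):

ffmpeg -f dshow -pixel_format yuyv422 -video_size 640x480 -framerate 30 -i video="USB Camera" -c:v rawvideo raw.avi
ffmpeg -i raw.avi -c:v libx264 -pix_fmt yuv422p out.mkv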

    


    I did a raw recording of the webcam (in which I recorded a bright light, in the dark); I didn't set any of the options in the brackets above. I then converted this video using libx264 with the flags "-dst_range 1 -color_range 2", which I saw elsewhere on the internet.

    


    Taking a screenshot of this video using VLC and putting it through imagemagick's identify -verbose shows that the color range of the screenshot is 0-255. As for the video itself, "MediaInfo" reports "color range: Full", and VLC's codec info says "Decoded format: Planar 4:2:2 YUV full scale" - is this info worth anything, or is it just metadata that the video got tagged with?

    


    At first I was happy about imagemagick's color range reporting, but now I am thinking that the 0-255 range could be a result of "overshoot" values produced by the camera, which aren't actually supposed to be mapped linearly.

    


    I appreciate that this probably feels like some school-kiddy offloading their homework assignment to avoid doing work, but I hope it can be seen that I've looked into these things prior to putting this post together.

    


    Thanks in advance if anyone takes the time to answer anything.

    


  • FFmpeg: Encoder did not produce proper pts, making some up

    22 November 2022, by Chroluma

    I'm trying to convert a yuv image to jpg format via FFmpeg, but I got [image2 @ 0x38750] Encoder did not produce proper pts, making some up. while the program was encoding. I looked up some references in which someone said avcodec_send_frame can only be used when there is more than one frame. Can I use this approach to achieve the image conversion? Here is my code:

    


    int ff_yuv422P_to_jpeg(int imgWidth, int imgHeight, uint8_t* yuvData, int yuvLength)
{
    /* ===== define ===== */
    const char* OutputFileName = "img.jpg";
    int retval = 0;

    /* ===== context ===== */
    struct AVFormatContext* pFormatCtx = avformat_alloc_context();
    avformat_alloc_output_context2(&pFormatCtx, NULL, NULL, OutputFileName);
    struct AVOutputFormat* fmt = pFormatCtx->oformat;

    struct AVStream* video_st = avformat_new_stream(pFormatCtx, 0);
    if (!video_st)
    {
        retval = 1;
        perror("ff_yuv422_to_jpeg(): avformat_new_stream");
        goto out_close_ctx;
    }

    /* ===== codec ===== */
    struct AVCodecContext* pCodecCtx = avcodec_alloc_context3(NULL);
    if (avcodec_parameters_to_context(pCodecCtx, video_st->codecpar) < 0)
    {
        retval = 2;
        perror("ff_yuv422_to_jpeg(): avcodec_parameters_to_context");
        goto out_close_ctx;
    }

    pCodecCtx->codec_id = fmt->video_codec;
    pCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
    pCodecCtx->pix_fmt = AV_PIX_FMT_YUVJ422P;
    pCodecCtx->width = imgWidth;
    pCodecCtx->height = imgHeight;
    pCodecCtx->time_base.num = 1;
    pCodecCtx->time_base.den = 25;

    //dump info
    av_dump_format(pFormatCtx, 0, OutputFileName, 1);

    struct AVCodec *pCodec = avcodec_find_encoder(pCodecCtx->codec_id);
    if (!pCodec)
    {
        retval = 3;
        perror("ff_yuv422_to_jpeg(): avcodec_find_encoder");
        goto out_close_st;
    }

    if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
    {
        retval = 4;
        perror("ff_yuv422_to_jpeg(): avcodec_open2");
        goto out_close_st;
    }

    /* ===== frame ===== */
    struct AVFrame* pictureFrame = av_frame_alloc();
    pictureFrame->width = pCodecCtx->width;
    pictureFrame->height = pCodecCtx->height;
    pictureFrame->format = AV_PIX_FMT_YUVJ422P;

    int picSize = av_image_get_buffer_size(AV_PIX_FMT_YUVJ422P, pCodecCtx->width, pCodecCtx->height, 1);
    uint8_t* pictureBuffer = (uint8_t*)av_malloc(picSize);
    av_image_fill_arrays(pictureFrame->data, pictureFrame->linesize, pictureBuffer, AV_PIX_FMT_YUVJ422P, pCodecCtx->width, pCodecCtx->height, 1);
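
    /* note: yuvData is never copied into pictureBuffer, so the encoder would see
       uninitialized memory; something like
       memcpy(pictureBuffer, yuvData, FFMIN(yuvLength, picSize));
       is needed to actually encode the input image */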

    /* ===== write header ===== */
    int notUseRetVal = avformat_write_header(pFormatCtx, NULL);
    
    struct AVPacket* pkt = av_packet_alloc();
    av_new_packet(pkt, imgHeight * imgWidth * 3);
    pictureFrame->data[0] = pictureBuffer + 0 * (yuvLength / 4);
    pictureFrame->data[1] = pictureBuffer + 2 * (yuvLength / 4);
    pictureFrame->data[2] = pictureBuffer + 3 * (yuvLength / 4);

    /* ===== encode ===== */
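    /* note: pictureFrame->pts is left at AV_NOPTS_VALUE, which is what makes the
       muxer print "Encoder did not produce proper pts, making some up"; for a
       single image, pictureFrame->pts = 0; would avoid the warning */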
    int ret = avcodec_send_frame(pCodecCtx, pictureFrame);
    while (ret >= 0)
    {
        pkt->stream_index = video_st->index;
        ret = avcodec_receive_packet(pCodecCtx, pkt);
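        /* note: AVERROR(EAGAIN) and AVERROR_EOF are not real failures here - they
           mean the encoder wants more input, or is fully drained after a flush -
           so this loop reports an error even after the packet was written;
           perror() also prints errno rather than the FFmpeg error, see
           av_err2str(ret) */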
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        {
            retval = 5;
            perror("ff_yuv422_to_jpeg(): avcodec_receive_packet");
            goto out_close_picture;
        }
        else if (ret < 0)
        {
            retval = 6;
            perror("ff_yuv422_to_jpeg(): avcodec_receive_packet ret < 0");
            goto out_close_picture;
        }
        av_write_frame(pFormatCtx, pkt);
    }
    av_packet_unref(pkt);

    /* ===== write trailer ===== */
    av_write_trailer(pFormatCtx);

#if Print_Debug_Info
    printf("yuv2jpg Encode Success.\n");
#endif

out_close_picture:
    if (pictureFrame) av_free(pictureFrame);
    if (pictureBuffer) av_free(pictureBuffer);

out_close_st:
    // old school
    // if (video_st) avcodec_close(video_st->codec);

out_close_ctx:
    if (pFormatCtx) avformat_free_context(pFormatCtx);

out_return:
    return retval;
}


    


    and my log:

    


    Output #0, image2, to 'img.jpg':
    Stream #0:0: Unknown: none
[image2 @ 0x38750] Encoder did not produce proper pts, making some up.
ff_yuv422_to_jpeg(): avcodec_receive_packet: Success


    


    I looked up avcodec_receive_packet()'s documentation, and my code returns the error code AVERROR(EAGAIN).