Advanced search

Media (0)

Keyword: - Tags - /logo

No media matching your criteria is available on this site.

Other articles (69)

  • Sites built with MediaSPIP

    2 May 2011

    This page showcases some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Support audio et vidéo HTML5

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used as a fallback.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and audio both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (9917)

  • Ignore streams when finding stream info

    17 March 2018, by CSNewman

    I’m trying to speed up the start of ffmpeg when processing my live stream, and have narrowed the delay down to the ‘avformat_find_stream_info’ function. The source I’m processing contains a number of streams that ffmpeg is unable to identify, so it spends a long time (the entire analysis window) trying to find information about them. I know ahead of time that I only want information about streams #0:0 and #0:1, and I will be disregarding the other streams anyway.

    I managed to work around this using the API, by overriding the stream count before the find-stream-info call and restoring the original value afterwards.

    uint oldSize = inputContext->nb_streams;  // remember the real stream count
    inputContext->nb_streams = 2;             // probe only the first two streams
    if (ffmpeg.avformat_find_stream_info(inputContext, null) < 0)
    {
       throw new InvalidOperationException("Could not read stream information.");
    }
    inputContext->nb_streams = oldSize;       // restore the full stream list

    However, I would prefer to use ffmpeg’s CLI.

    My current command:

    ffmpeg -find_stream_info false -i http://192.168.1.112:5004/auto/v1 -c copy -map 0:v:0 -map 0:a:0 -ignore_unknown   -f hls -hls_flags delete_segments  -segment_list playlist.m3u8 -segment_list_type hls -segment_list_size 10 -segment_list_flags +live -segment_time 10 -f segment stream%%05d.ts

    This gives the following output (unneeded logging removed):

    [mpeg2video @ 000001ce79423440] Invalid frame dimensions 0x0.
       Last message repeated 9 times
    [mpegts @ 000001ce7941b740] Could not find codec parameters for stream 5 (Unknown: none ([11][0][0][0] / 0x000B)): unknown codec
    Consider increasing the value for the 'analyzeduration' and 'probesize' options
    [mpegts @ 000001ce7941b740] Could not find codec parameters for stream 6 (Unknown: none ([11][0][0][0] / 0x000B)): unknown codec
    Consider increasing the value for the 'analyzeduration' and 'probesize' options
    [mpegts @ 000001ce7941b740] Could not find codec parameters for stream 7 (Unknown: none ([5][0][0][0] / 0x0005)): unknown codec
    Consider increasing the value for the 'analyzeduration' and 'probesize' options
    [mpegts @ 000001ce7941b740] Could not find codec parameters for stream 8 (Unknown: none ([5][0][0][0] / 0x0005)): unknown codec
    Consider increasing the value for the 'analyzeduration' and 'probesize' options
    Input #0, mpegts, from 'http://192.168.1.112:5004/auto/v1':
     Duration: N/A, start: 91296.182311, bitrate: N/A
     Program 4165
       Stream #0:0[0x65]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv, top first), 704x576 [SAR 16:11 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
       Stream #0:1[0x66](eng): Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, s16p, 256 kb/s
       Stream #0:2[0x6a](eng): Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, mono, s16p, 64 kb/s (visual impaired) (dependent)
       Stream #0:3[0x69](eng): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
       Stream #0:4[0x98]: Audio: mp2 ([6][0][0][0] / 0x0006), 48000 Hz, stereo, s16p, 128 kb/s
       Stream #0:5[0x1c21]: Unknown: none ([11][0][0][0] / 0x000B)
       Stream #0:6[0x1c33]: Unknown: none ([11][0][0][0] / 0x000B)
       Stream #0:7[0x1bbf]: Unknown: none ([5][0][0][0] / 0x0005)
       Stream #0:8[0x1bc1]: Unknown: none ([5][0][0][0] / 0x0005)

    I’m unsure whether there’s a way to achieve the same functionality with the CLI; however, I’m open to changing how my setup works.

    One idea I considered was running two ffmpeg instances: one to strip the unneeded streams (without finding stream info), and another that takes the stripped stream and performs the rest of the processing, as sketched below.
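
    A hedged sketch of that two-process idea (untested; it assumes the stream-copied MPEG-TS retains enough codec information from the PMT for the second instance to probe quickly):

    ffmpeg -find_stream_info false -i http://192.168.1.112:5004/auto/v1 -map 0:0 -map 0:1 -c copy -f mpegts - | ffmpeg -i - -c copy [segmenting options as above]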

    Any insight here would be appreciated; thanks in advance.

  • Rotating videos with FFmpeg [closed]

    14 June 2023, by jocull

    I have been trying to figure out how to rotate videos with FFmpeg. I am working with iPhone videos taken in portrait mode. I know how to determine the current degrees of rotation using MediaInfo (excellent library, btw), but I'm now stuck on FFmpeg.

    From what I've read, what you need to use is a vfilter option. It should look like this:

    ffmpeg -vfilters "rotate=90" -i input.mp4 output.mp4

    However, I can't get this to work. First, -vfilters doesn't exist anymore; it's now just -vf. Second, I get this error:

    No such filter: 'rotate'
Error opening filters!

    As far as I know, I have an all-options-on build of FFmpeg. Running ffmpeg -filters shows this:

    Filters:
anull            Pass the source unchanged to the output.
aspect           Set the frame aspect ratio.
crop             Crop the input video to x:y:width:height.
fifo             Buffer input images and send them when they are requested.
format           Convert the input video to one of the specified pixel formats.
hflip            Horizontally flip the input video.
noformat         Force libavfilter not to use any of the specified pixel formats for the input to the next filter.
null             Pass the source unchanged to the output.
pad              Pad input image to width:height[:x:y[:color]] (default x and y: 0, default color: black).
pixdesctest      Test pixel format definitions.
pixelaspect      Set the pixel aspect ratio.
scale            Scale the input video to width:height size and/or convert the image format.
slicify          Pass the images of input video on to next video filter as multiple slices.
unsharp          Sharpen or blur the input video.
vflip            Flip the input video vertically.
buffer           Buffer video frames, and make them accessible to the filterchain.
color            Provide an uniformly colored input, syntax is: [color[:size[:rate]]]
nullsrc          Null video source, never return images.
nullsink         Do absolutely nothing with the input video.

    Having vflip and hflip is great and all, but they just won't get me where I need to go. I need the ability to rotate videos by 90 degrees at the very least; 270 degrees would be an excellent option to have as well. Where have the rotate options gone?
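
    For reference, newer FFmpeg builds ship a transpose filter that covers exactly this case; the filter list above clearly comes from a build that predates it. Assuming such a build:

    ffmpeg -i input.mp4 -vf "transpose=1" output.mp4

    Here transpose=1 rotates 90 degrees clockwise and transpose=2 rotates 90 degrees counter-clockwise (i.e. 270 degrees); chaining transpose twice gives 180 degrees.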

  • Display ffmpeg frames on an OpenGL texture

    21 March 2018, by naki

    I am using Dranger's tutorial01 (ffmpeg) to decode a video and obtain the decoded frames. I want to use OpenGL to display the video.

    http://dranger.com/ffmpeg/tutorial01.html

    The main function is as follows:

    int main (int argc, char** argv) {
    // opengl stuff
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA);
    glutInitWindowSize(800, 600);
    glutCreateWindow("Hello GL");

    glutReshapeFunc(changeViewport);
    glutDisplayFunc(render);

    GLenum err = glewInit();
    if(GLEW_OK !=err){
       fprintf(stderr, "GLEW error");
       return 1;
    }

    glClear(GL_COLOR_BUFFER_BIT);


    glEnable(GL_TEXTURE_2D);
    GLuint texture;
    glGenTextures(1, &texture); //Make room for our texture
    glBindTexture(GL_TEXTURE_2D, texture);

    //ffmpeg stuff

    AVFormatContext *pFormatCtx = NULL;
    int             i, videoStream;
    AVCodecContext  *pCodecCtx = NULL;
    AVCodec         *pCodec = NULL;
    AVFrame         *pFrame = NULL;
    AVFrame         *pFrameRGB = NULL;
    AVPacket        packet;
    int             frameFinished;
    int             numBytes;
    uint8_t         *buffer = NULL;

    AVDictionary    *optionsDict = NULL;


    if(argc < 2) {
    printf("Please provide a movie file\n");
    return -1;
    }
    // Register all formats and codecs

    av_register_all();

    // Open video file
    if(avformat_open_input(&pFormatCtx, argv[1], NULL, NULL)!=0)
      return -1; // Couldn't open file

    // Retrieve stream information

    if(avformat_find_stream_info(pFormatCtx, NULL)<0)
    return -1; // Couldn't find stream information

    // Dump information about file onto standard error
    av_dump_format(pFormatCtx, 0, argv[1], 0);

    // Find the first video stream

    videoStream=-1;
    for(i=0; i<pFormatCtx->nb_streams; i++)
    if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
     videoStream=i;
     break;
    }
    if(videoStream==-1)
    return -1; // Didn't find a video stream

    // Get a pointer to the codec context for the video stream
    pCodecCtx=pFormatCtx->streams[videoStream]->codec;

    // Find the decoder for the video stream
    pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
    if(pCodec==NULL) {
      fprintf(stderr, "Unsupported codec!\n");
      return -1; // Codec not found
    }
    // Open codec
    if(avcodec_open2(pCodecCtx, pCodec, &optionsDict)<0)
      return -1; // Could not open codec

    // Allocate video frame
    pFrame=av_frame_alloc();

    // Allocate an AVFrame structure
    pFrameRGB=av_frame_alloc();
    if(pFrameRGB==NULL)
    return -1;

    // Determine required buffer size and allocate buffer
    numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
                 pCodecCtx->height);
    buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

    struct SwsContext      *sws_ctx = sws_getContext(pCodecCtx->width,
              pCodecCtx->height, pCodecCtx->pix_fmt, 800,
              600, PIX_FMT_RGB24, SWS_BICUBIC, NULL,
              NULL, NULL);


    // Assign appropriate parts of buffer to image planes in pFrameRGB
    // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
    // of AVPicture
    avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
        pCodecCtx->width, pCodecCtx->height);

    // Read frames and save first five frames to disk
    i=0;
    while(av_read_frame(pFormatCtx, &packet)>=0) {


    // Is this a packet from the video stream?
    if(packet.stream_index==videoStream) {
     // Decode video frame
     avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished,
              &packet);

     // Did we get a video frame?
     if(frameFinished) {
    // Convert the image from its native format to RGB
     /*  sws_scale
       (
           sws_ctx,
           (uint8_t const * const *)pFrame->data,
           pFrame->linesize,
           0,
           pCodecCtx->height,
           pFrameRGB->data,
           pFrameRGB->linesize
       );
      */
    sws_scale(sws_ctx, pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
     // additional opengl
       glBindTexture(GL_TEXTURE_2D, texture);

           //gluBuild2DMipmaps(GL_TEXTURE_2D, 3, pCodecCtx->width, pCodecCtx->height, GL_RGB, GL_UNSIGNED_INT, pFrameRGB->data[0]);
      // glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, 840, 460, GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);

           glTexImage2D(GL_TEXTURE_2D,                //Always GL_TEXTURE_2D
               0,                            //0 for now
               GL_RGB,                       //Format OpenGL uses for image
               pCodecCtx->width, pCodecCtx->height,  //Width and height
               0,                            //The border of the image
               GL_RGB, //GL_RGB, because pixels are stored in RGB format
               GL_UNSIGNED_BYTE, //GL_UNSIGNED_BYTE, because pixels are stored
                               //as unsigned numbers
               pFrameRGB->data[0]);               //The actual pixel data
     // additional opengl end  

    // Save the frame to disk
    if(++i<=5)
     SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height,
           i);
     }
    }

    glColor3f(1,1,1);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBegin(GL_QUADS);
       glTexCoord2f(0,1);
       glVertex3f(0,0,0);

       glTexCoord2f(1,1);
       glVertex3f(pCodecCtx->width,0,0);

       glTexCoord2f(1,0);
       glVertex3f(pCodecCtx->width, pCodecCtx->height,0);

       glTexCoord2f(0,0);
       glVertex3f(0,pCodecCtx->height,0);

    glEnd();
    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
    }


     // Free the RGB image
    av_free(buffer);
    av_free(pFrameRGB);

    // Free the YUV frame
    av_free(pFrame);

    // Close the codec
    avcodec_close(pCodecCtx);

    // Close the video file
    avformat_close_input(&pFormatCtx);

    return 0;
    }

    Unfortunately, I could not find my solution here:

    ffmpeg video to opengl texture

    The program compiles, but does not show any video on the texture; only an OpenGL window is created.
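
    For what it's worth, a likely gap is hinted at by the code itself: the window is single-buffered (GLUT_RGBA), the drawing is never flushed, and GLUT's event loop is never serviced, so nothing can appear. A minimal sketch of the missing pieces, assuming a freeglut build (glutMainLoopEvent is a freeglut extension) and that the projection should match the pixel-coordinate vertices used above:

    // once, after the codec is open: map pixel coordinates to clip space,
    // otherwise the (0,0)..(width,height) quad lands mostly outside the view
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, pCodecCtx->width, 0, pCodecCtx->height);
    glMatrixMode(GL_MODELVIEW);

    // at the end of each decoded frame, after glEnd():
    glFlush();            // single-buffered context: actually push the drawing
    glutMainLoopEvent();  // freeglut: pump window events so the frame shows up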