
Other articles (46)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • Making the files available

    14 April 2011

    By default, when it is initialized, MediaSPIP does not allow visitors to download the files, whether they are originals or the result of transformation or encoding. It only allows them to be viewed.
    However, it is possible, and easy, to give visitors access to these documents in various forms.
    All of this happens on the template configuration page. You need to go to the channel's administration area and choose, in the navigation, (...)

  • Installation in farm mode

    4 February 2011

    Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with SPIP's inner workings, unlike the standalone version, which requires no real specific knowledge since SPIP's usual private area is no longer used.
    First of all, you must have installed the same files as for the installation (...)

On other sites (6635)

  • WebRTC Detect Orientation from Peer Video Stream

    30 March 2018, by Daryl

    Using RecordRTC and WebRTC I am able to save a peer MediaStream as a webm video file. The orientation of the file differs based on the peer device, meaning a 640x480 video may be rotated clockwise or counterclockwise.

    To re-orient the recorded video, some of the videos need to be rotated clockwise and others counterclockwise. I need a way to determine which direction to rotate each video; otherwise, some videos will end up right side up and others upside down.

    I attempted to look up the orientation using ffprobe. However, the videos created do not have the "rotate" tag in their metadata.

    I have also been unable to find the orientation in the Peer video stream of the WebRTC object.

    I really thought that, since the video element displays the WebRTC peer stream correctly, I should be able to get the orientation from it.
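
    One way to run the ffprobe check mentioned above, with a hypothetical recording.webm, is to query the video stream's rotate tag directly; per the question, these WebRTC recordings carry no such tag, so the query comes back empty:

    ffprobe -v error -select_streams v:0 -show_entries stream_tags=rotate -of default=noprint_wrappers=1:nokey=1 recording.webm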

  • Display ffmpeg frames on OpenGL texture

    21 March 2018, by naki

    I am using Dranger's tutorial01 (ffmpeg) to decode a video and get the AVFrames. I want to use OpenGL to display the video.

    http://dranger.com/ffmpeg/tutorial01.html

    The main function is as follows:

    int main (int argc, char** argv) {
    // opengl stuff
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA);
    glutInitWindowSize(800, 600);
    glutCreateWindow("Hello GL");

    glutReshapeFunc(changeViewport);
    glutDisplayFunc(render);

    GLenum err = glewInit();
    if(GLEW_OK !=err){
       fprintf(stderr, "GLEW error");
       return 1;
    }

    glClear(GL_COLOR_BUFFER_BIT);


    glEnable(GL_TEXTURE_2D);
    GLuint texture;
    glGenTextures(1, &texture); //Make room for our texture
    glBindTexture(GL_TEXTURE_2D, texture);
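    // NOTE (editorial assumption, not in the original post): no
    // GL_TEXTURE_MIN_FILTER is set here, and the GL default
    // (GL_NEAREST_MIPMAP_LINEAR) expects mipmaps, so a texture with only
    // level 0 uploaded is "incomplete" and is effectively ignored when
    // the quad is drawn. A likely fix:
    // glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);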

    //ffmpeg stuff

    AVFormatContext *pFormatCtx = NULL;
    int             i, videoStream;
    AVCodecContext  *pCodecCtx = NULL;
    AVCodec         *pCodec = NULL;
    AVFrame         *pFrame = NULL;
    AVFrame         *pFrameRGB = NULL;
    AVPacket        packet;
    int             frameFinished;
    int             numBytes;
    uint8_t         *buffer = NULL;

    AVDictionary    *optionsDict = NULL;


    if(argc < 2) {
    printf("Please provide a movie file\n");
    return -1;
    }
    // Register all formats and codecs

    av_register_all();

    // Open video file
    if(avformat_open_input(&pFormatCtx, argv[1], NULL, NULL)!=0)
      return -1; // Couldn't open file

    // Retrieve stream information

    if(avformat_find_stream_info(pFormatCtx, NULL)<0)
    return -1; // Couldn't find stream information

    // Dump information about file onto standard error
    av_dump_format(pFormatCtx, 0, argv[1], 0);

    // Find the first video stream

    videoStream=-1;
    for(i=0; i<pFormatCtx->nb_streams; i++)
    if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
     videoStream=i;
     break;
    }
    if(videoStream==-1)
    return -1; // Didn't find a video stream

    // Get a pointer to the codec context for the video stream
    pCodecCtx=pFormatCtx->streams[videoStream]->codec;

    // Find the decoder for the video stream
    pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
    if(pCodec==NULL) {
      fprintf(stderr, "Unsupported codec!\n");
      return -1; // Codec not found
    }
    // Open codec
    if(avcodec_open2(pCodecCtx, pCodec, &optionsDict)<0)
      return -1; // Could not open codec

    // Allocate video frame
    pFrame=av_frame_alloc();

    // Allocate an AVFrame structure
    pFrameRGB=av_frame_alloc();
    if(pFrameRGB==NULL)
    return -1;

    // Determine required buffer size and allocate buffer
    numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
                 pCodecCtx->height);
    buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

    struct SwsContext      *sws_ctx = sws_getContext(pCodecCtx->width,
              pCodecCtx->height, pCodecCtx->pix_fmt, 800,
              600, PIX_FMT_RGB24, SWS_BICUBIC, NULL,
              NULL, NULL);
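    // NOTE (editorial): this context scales to a fixed 800x600 output, but
    // pFrameRGB's buffer (avpicture_fill below) and the glTexImage2D call
    // are sized with pCodecCtx->width/height; the mismatch overflows the
    // buffer when the source is smaller than 800x600 and garbles the
    // texture upload otherwise.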


    // Assign appropriate parts of buffer to image planes in pFrameRGB
    // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
    // of AVPicture
    avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
        pCodecCtx->width, pCodecCtx->height);

    // Read frames and save first five frames to disk
    i=0;
    while(av_read_frame(pFormatCtx, &packet)>=0) {


    // Is this a packet from the video stream?
    if(packet.stream_index==videoStream) {
     // Decode video frame
     avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished,
              &packet);

     // Did we get a video frame?
     if(frameFinished) {
    // Convert the image from its native format to RGB
     /*  sws_scale
       (
           sws_ctx,
           (uint8_t const * const *)pFrame->data,
           pFrame->linesize,
           0,
           pCodecCtx->height,
           pFrameRGB->data,
           pFrameRGB->linesize
       );
      */
    sws_scale(sws_ctx, pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
     // additional opengl
       glBindTexture(GL_TEXTURE_2D, texture);

           //gluBuild2DMipmaps(GL_TEXTURE_2D, 3, pCodecCtx->width, pCodecCtx->height, GL_RGB, GL_UNSIGNED_INT, pFrameRGB->data[0]);
      // glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, 840, 460, GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);

           glTexImage2D(GL_TEXTURE_2D,                //Always GL_TEXTURE_2D
               0,                            //0 for now
               GL_RGB,                       //Format OpenGL uses for image
               pCodecCtx->width, pCodecCtx->height,  //Width and height
               0,                            //The border of the image
               GL_RGB, //GL_RGB, because pixels are stored in RGB format
               GL_UNSIGNED_BYTE, //GL_UNSIGNED_BYTE, because pixels are stored
                               //as unsigned numbers
               pFrameRGB->data[0]);               //The actual pixel data
     // additional opengl end  

    // Save the frame to disk
    if(++i<=5)
     SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height,
           i);
     }
    }

    glColor3f(1,1,1);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBegin(GL_QUADS);
       glTexCoord2f(0,1);
       glVertex3f(0,0,0);

       glTexCoord2f(1,1);
       glVertex3f(pCodecCtx->width,0,0);

       glTexCoord2f(1,0);
       glVertex3f(pCodecCtx->width, pCodecCtx->height,0);

       glTexCoord2f(0,0);
       glVertex3f(0,pCodecCtx->height,0);

    glEnd();
    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
    }


     // Free the RGB image
    av_free(buffer);
    av_free(pFrameRGB);

    // Free the YUV frame
    av_free(pFrame);

    // Close the codec
    avcodec_close(pCodecCtx);

    // Close the video file
    avformat_close_input(&pFormatCtx);

    return 0;
    }

    Unfortunately, I could not find my solution here:

    ffmpeg video to opengl texture

    The program compiles but does not show any video on the texture. Just an empty OpenGL window is created.
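
    A plausible cause, judging from the listing alone (an editorial assumption, not something stated in the question): main() draws the quad inline and returns without ever calling glutMainLoop(), so the registered render callback never runs and the window is never actually presented; the quad is also drawn in pixel coordinates without a matching projection. A minimal sketch of the missing loop structure, using hypothetical global names:

    static GLuint g_texture;   // hypothetical global shared with the decode/upload code

    static void render(void) {
        glClear(GL_COLOR_BUFFER_BIT);
        glBindTexture(GL_TEXTURE_2D, g_texture);
        glBegin(GL_QUADS);                        // full-viewport quad in default NDC coordinates
            glTexCoord2f(0, 1); glVertex2f(-1.0f, -1.0f);
            glTexCoord2f(1, 1); glVertex2f( 1.0f, -1.0f);
            glTexCoord2f(1, 0); glVertex2f( 1.0f,  1.0f);
            glTexCoord2f(0, 0); glVertex2f(-1.0f,  1.0f);
        glEnd();
        glutSwapBuffers();                        // needs GLUT_DOUBLE in glutInitDisplayMode
        glutPostRedisplay();                      // keep repainting so new frames show up
    }

    // ...and at the end of main(), instead of returning straight away:
    // glutMainLoop();   // never returns; dispatches render() and other callbacks

    For more than a single frame to appear, the decode loop itself would also have to move into a GLUT idle or timer callback, since glutMainLoop() does not return.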

  • Rotating videos with FFmpeg [closed]

    14 June 2023, by jocull

    I have been trying to figure out how to rotate videos with FFmpeg. I am working with iPhone videos taken in portrait mode. I know how to determine the current degrees of rotation using MediaInfo (excellent library, btw) but I'm stuck on FFmpeg now.

    From what I've read, what you need to use is a vfilter option. According to what I see, it should look like this:

    ffmpeg -vfilters "rotate=90" -i input.mp4 output.mp4

    However, I can't get this to work. First, -vfilters doesn't exist anymore; it's now just -vf. Second, I get this error:

    No such filter: 'rotate'
    Error opening filters!

    As far as I know, I have an all-options-on build of FFmpeg. Running ffmpeg -filters shows this:

Filters:
anull            Pass the source unchanged to the output.
aspect           Set the frame aspect ratio.
crop             Crop the input video to x:y:width:height.
fifo             Buffer input images and send them when they are requested.
format           Convert the input video to one of the specified pixel formats.
hflip            Horizontally flip the input video.
noformat         Force libavfilter not to use any of the specified pixel formats for the input to the next filter.
null             Pass the source unchanged to the output.
pad              Pad input image to width:height[:x:y[:color]] (default x and y: 0, default color: black).
pixdesctest      Test pixel format definitions.
pixelaspect      Set the pixel aspect ratio.
scale            Scale the input video to width:height size and/or convert the image format.
slicify          Pass the images of input video on to next video filter as multiple slices.
unsharp          Sharpen or blur the input video.
vflip            Flip the input video vertically.
buffer           Buffer video frames, and make them accessible to the filterchain.
color            Provide an uniformly colored input, syntax is: [color[:size[:rate]]]
nullsrc          Null video source, never return images.
nullsink         Do absolutely nothing with the input video.

    Having the options for vflip and hflip is great and all, but they just won't get me where I need to go. I need the ability to rotate videos 90 degrees at the very least. 270 degrees would be an excellent option to have as well. Where have the rotate options gone?
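
    For reference, later FFmpeg builds added a transpose filter, which does not appear in the filter list above but covers exactly the 90-degree cases; a sketch of the usual invocations, with hypothetical file names:

    ffmpeg -i input.mp4 -vf "transpose=1" output.mp4
    ffmpeg -i input.mp4 -vf "transpose=2" output.mp4

    Here transpose=1 rotates 90 degrees clockwise and transpose=2 rotates 90 degrees counterclockwise (equivalent to 270 degrees clockwise); a rotate filter for arbitrary angles was also added in later builds.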