Other articles (64)

  • Final creation of the channel

    12 March 2010

    Once your request has been validated, you can proceed with the actual creation of the channel. Each channel is a fully fledged site in its own right, placed under your responsibility. The platform administrators have no access to it.
    Upon validation, you receive an email inviting you to create your channel.
    To do so, simply go to its address, in our example "http://votre_sous_domaine.mediaspip.net".
    At that point a password is requested; you simply have to (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First of all, you need to create a SPIP article and attach the "source" video document to it.
    When this document is attached to the article, two actions are carried out in addition to the normal behaviour: retrieval of the technical information of the file’s audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (10895)

  • Different video players showing incorrect mp4 resolution after ffmpeg conversion

    18 November 2016, by Nova

    After getting help from http://stackoverflow.com/a/40601020/6318164 on how to convert webm to mp4, I was able to keep the video’s aspect ratio by setting only the height with -vf scale=-2:720.

    I then came across another problem: I found that both the width and the height have to be ones the video players support, when I thought only the height had to be specified.

    After browsing around I found this script http://stackoverflow.com/a/35487394/6318164 where I can change the video’s canvas to a common width and height standard. If I understand correctly, it shrinks the video to fit inside the center of the specified canvas without losing the aspect ratio, filling the empty space with black padding, which is the result I want.

    However, although it solved the playback problems in all the players, I found that different video players report different resolution information for the same video.

    I’ve modified the script here for Linux terminal use:

    X=1280; Y=720; ffmpeg -i old.webm -t 5 -vf "scale=min(iw*$Y/ih\,$X):min($Y\,ih*$X/iw),pad=$X:$Y:($X-iw)/2:($Y-ih)/2" new.mp4

    This is what I found about the resolution differences for the values I set.

    X=1280; Y=720;

    webm          -> mp4
    =========================================================
    1280x752      -> 1280x720 X-plore (Android)
    Not supported -> 1339x720 Telegram (Android)
    1338x752      -> 1340x720 GNOME MPlayer (Linux)
    Not supported -> ???????? Built-in Video Player (Android)

    The question is: am I doing anything wrong with the ffmpeg conversion that makes different players report incorrect resolutions? I checked some other videos I have and they show the correct resolutions, except for this converted one.

    Edit

    With the help of the accepted answer, this was my working command, if anyone needs it:

    X=1280; Y=720; ffmpeg -i input.webm -vf "scale='if(gt(a*sar,16/9),${X},${Y}*iw*sar/ih)':'if(gt(a*sar,16/9),${X}*ih/iw/sar,${Y})',pad=${X}:${Y}:(ow-iw)/2:(oh-ih)/2,setsar=1" output.mp4
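
    If it helps to see what the file itself stores, as opposed to what a given player chooses to display, ffprobe can print the coded size together with the sample and display aspect ratios; players differ in whether they honour the SAR when scaling, which is one common reason for mismatched numbers like those in the table above. A quick check (the file name here is just an example):

    ffprobe -v error -select_streams v:0 -show_entries stream=width,height,sample_aspect_ratio,display_aspect_ratio -of default=noprint_wrappers=1 output.mp4
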
  • ffmpeg : recommended bitrate vs resolution [duplicate]

    15 October 2016, by Santhosh Yedidi

    I have a high-resolution video:

     Duration: 00:06:28.80, start: 0.000000, bitrate: 15968 kb/s
       Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 15809 kb/s, 25 fps, 25 tbr, 25k tbn, 50 tbc (default)
       Metadata:
         creation_time   : 2016-10-11 05:35:02
         handler_name    : Alias Data Handler
         encoder         : AVC Coding
       Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 157 kb/s (default)
       Metadata:
         creation_time   : 2016-10-11 05:35:02
         handler_name    : Alias Data Handler

    It’s a 6+ minute video.

    I am OK with a resolution of 240p (because I want to send it on WhatsApp).

    So that my video still looks good, what is the recommended bitrate for 240p? Also, what is the minimum bitrate below which the video is likely to start pixelating?

    I don’t want to go for a high bitrate either, because ultimately I don’t want the file to be any bigger than it needs to be for 240p.

    I use mpv to view the video. If I scale the original video to 240p, the quality after conversion should match the quality visible in mpv; that will give me a first-hand idea of how good the conversion is.

    I expect a good reduction in the file size (the original is 740 MB) when it is reduced from 1920x1080 to 240p.
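
    (For a rough sense of scale, file size is approximately bitrate × duration: this 389-second clip at its current ~16 Mb/s works out to roughly 770 MB, in line with the 740 MB above, while the same clip at, say, 500 kb/s overall would be on the order of 25 MB. The 500 kb/s figure is only an illustrative assumption, not a recommendation.)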

    I have found some information regarding this (the image it came from is not included here). How much of it is true?
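
    For what it’s worth, one way to sidestep picking a bitrate is a constant-quality (CRF) encode, letting libx264 spend whatever bits that quality needs at 240p. A sketch along these lines (the file names and the CRF value of 26 are example assumptions, to be adjusted to taste):

    ffmpeg -i input.mp4 -vf scale=-2:240 -c:v libx264 -crf 26 -preset medium -c:a aac -b:a 64k output_240p.mp4

    Here scale=-2:240 keeps the aspect ratio while rounding the width to an even number, which libx264 requires.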

  • avcodec_decode_video2 fails to decode after frame resolution change

    7 October 2016, by Krzysztof Kansy

    I’m using ffmpeg in an Android project via JNI to decode a real-time H264 video stream. On the Java side I’m only sending the byte arrays to the native module. The native code runs a loop and checks the data buffers for new data to decode. Each data chunk is processed with:

    int bytesLeft = data->GetSize();
    int paserLength = 0;
    int decodeDataLength = 0;
    int gotPicture = 0;
    const uint8_t* buffer = data->GetData();
    while (bytesLeft > 0) {
       AVPacket packet;
       av_init_packet(&packet);
       // Let the parser split the raw buffer into complete packets.
       paserLength = av_parser_parse2(_codecPaser, _codecCtx, &packet.data, &packet.size, buffer, bytesLeft, AV_NOPTS_VALUE, AV_NOPTS_VALUE, AV_NOPTS_VALUE);
       bytesLeft -= paserLength;
       buffer += paserLength;

       // Decode only once the parser has assembled a full packet.
       if (packet.size > 0) {
           decodeDataLength = avcodec_decode_video2(_codecCtx, _frame, &gotPicture, &packet);
       }
       else {
           break;
       }
       av_free_packet(&packet);
    }

    if (gotPicture) {
        // pass the frame to rendering
    }

    The system works pretty well until the incoming video’s resolution changes. I need to handle the transition between 4:3 and 16:9 aspect ratios. With the AVCodecContext configured as follows:

    _codecCtx->flags2|=CODEC_FLAG2_FAST;
    _codecCtx->thread_count = 2;
    _codecCtx->thread_type = FF_THREAD_FRAME;

    if(_codec->capabilities&CODEC_FLAG_LOW_DELAY){
       _codecCtx->flags|=CODEC_FLAG_LOW_DELAY;
    }

    I wasn’t able to continue decoding new frames after a video resolution change. The got_picture_ptr flag that avcodec_decode_video2 sets when a whole frame is available was never true after that.
    This ticket made me wonder whether the issue is connected with multithreading. The only useful thing I’ve noticed is that when I change thread_type to FF_THREAD_SLICE the decoder is not always blocked after a resolution change; about half of my attempts were successful. Switching to single-threaded processing is not possible, as I need more computing power: setting the context to one thread does not solve the problem anyway, and it leaves the decoder unable to keep up with the incoming data.
    Everything works well after an app restart.

    I can only think of one workaround (it doesn’t really solve the problem): unloading and reloading the whole library after the stream resolution changes (e.g. as mentioned here). I don’t think it’s a good approach though; it will probably introduce other bugs and take a lot of time (from the user’s viewpoint).

    Is it possible to fix this issue?

    EDIT:
    I’ve dumped the stream data that is passed to the decoding pipeline. I changed the resolution a few times while the stream was being captured. Playing it back with ffplay showed that at the moment the resolution changed and the preview in my application froze, ffplay managed to continue, although its preview was glitchy for a second or so. You can see the full ffplay log here. In this case the video preview stopped when I changed the resolution to 960x720 for the second time (Reinit context to 960x720, pix_fmt: yuv420p in the log).
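
    As a purely hypothetical sketch (not something tried in the post above), one direction worth experimenting with is to watch the picture size reported by av_parser_parse2 and rebuild the decoder context when it changes, instead of reloading the whole library. Placed after the parser call and before avcodec_decode_video2, and reusing the variable names from the question, it could look roughly like this; whether it is sufficient depends on the stream and the ffmpeg version in use.

    // Hypothetical sketch only: reinitialise the decoder when the parser
    // reports a new picture size. Names follow the question's code.
    if (_codecCtx->width > 0 && _codecPaser->width > 0 &&
        (_codecPaser->width != _codecCtx->width ||
         _codecPaser->height != _codecCtx->height)) {
        avcodec_free_context(&_codecCtx);            // drop the old decoder state
        _codecCtx = avcodec_alloc_context3(_codec);  // fresh context for the new size
        _codecCtx->flags2 |= CODEC_FLAG2_FAST;
        _codecCtx->thread_count = 2;
        _codecCtx->thread_type = FF_THREAD_FRAME;
        if (avcodec_open2(_codecCtx, _codec, NULL) < 0) {
            // handle the error (e.g. stop decoding)
        }
    }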