
Other articles (42)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Adding notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, editing and deleting notes. By default, only the site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (5197)

  • C++ ffmpeg video missing frames and won't play in Quicktime

    5 December 2019, by Oliver Dain

    I wrote some C++ code that uses ffmpeg to encode a video. I’m having two strange issues:

    1. The final video is always missing 1 frame. That is, if I have it encode 10 frames, the final video only has 9 (at least that’s what ffprobe -show_frames -pretty $VIDEO | grep -F '[FRAME]' | wc -l tells me).
    2. The final video plays fine in some players (mpv and vlc) but not in QuickTime. QuickTime just shows a completely black screen.

    My code is roughly this (modified a bit to remove types that are unique to our code base):

    First, I open the video file, write the headers and initialize things:

    template <class PtrT>
    using UniquePtrWithDeleteFunction = std::unique_ptr<PtrT, std::function<void(PtrT*)>>;


    std::unique_ptr<FfmpegEncodingFrameSink> FfmpegEncodingFrameSink::Create(
        const std::string& dest_url) {
      AVFormatContext* tmp_format_ctxt;
      auto alloc_format_res = avformat_alloc_output_context2(&tmp_format_ctxt, nullptr, "mp4", dest_url.c_str());
      if (alloc_format_res < 0) {
        throw FfmpegException("Error opening output file.");
      }
      auto format_ctxt = UniquePtrWithDeleteFunction<AVFormatContext>(
          tmp_format_ctxt, CloseAvFormatContext);

      AVStream* out_stream_video = avformat_new_stream(format_ctxt.get(), nullptr);
      if (out_stream_video == nullptr) {
        throw FfmpegException("Could not create outputstream");
      }

      auto codec_context = GetCodecContext(options);
      out_stream_video->time_base = codec_context->time_base;

      auto ret = avcodec_parameters_from_context(out_stream_video->codecpar, codec_context.get());
      if (ret < 0) {
        throw FfmpegException("Failed to copy encoder parameters to outputstream");
      }

      if (!(format_ctxt->oformat->flags & AVFMT_NOFILE)) {
        ret = avio_open(&format_ctxt->pb, dest_url.c_str(), AVIO_FLAG_WRITE);
        if (ret < 0) {
          throw VideoDecodeException("Could not open output file: " + dest_url);
        }
      }

      ret = avformat_init_output(format_ctxt.get(), nullptr);
      if (ret < 0) {
        throw FfmpegException("Unable to initialize the codec.");
      }

      ret = avformat_write_header(format_ctxt.get(), nullptr);
      if (ret < 0) {
        throw FfmpegException("Error occurred writing format header");
      }

      return std::unique_ptr<FfmpegEncodingFrameSink>(
          new FfmpegEncodingFrameSink(std::move(format_ctxt), std::move(codec_context)));
    }

    Then, every time I get a new frame to encode I pass it to this function (the frames are being decoded via ffmpeg from another mp4 file, which QuickTime plays just fine):

    // If frame == nullptr then we're done and we're just flushing the encoder
    // otherwise encode an actual frame
    void FfmpegEncodingFrameSink::EncodeAndWriteFrame(
       const AVFrame* frame) {
      auto ret = avcodec_send_frame(codec_ctxt_.get(), frame);
      if (ret < 0) {
        throw FfmpegException("Error encoding the frame.");
      }

      AVPacket enc_packet;
      enc_packet.data = nullptr;
      enc_packet.size = 0;
      av_init_packet(&enc_packet);

      do {
        ret = avcodec_receive_packet(codec_ctxt_.get(), &enc_packet);
        if (ret == AVERROR(EAGAIN)) {
          CHECK(frame != nullptr);
          break;
        } else if (ret == AVERROR_EOF) {
          CHECK(frame == nullptr);
          break;
        } else if (ret < 0) {
          throw FfmpegException("Error putting the encoded frame into the packet.");
        }

        assert(ret == 0);
        enc_packet.stream_index = 0;

        LOG(INFO) << "Writing packet to stream.";
        av_interleaved_write_frame(format_ctxt_.get(), &enc_packet);
        av_packet_unref(&enc_packet);
      } while (ret == 0);
    }
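
    For comparison, the drain loop in the ffmpeg muxing example also rescales each packet from the encoder's time base to the output stream's time base before muxing, since avformat_write_header may change the stream's time_base. A minimal sketch of that pattern (the WriteEncodedPacket helper and the single-stream assumption are mine, not part of the code above):

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    }

    // Hypothetical helper: rescale the packet timestamps from the encoder's
    // time base to the output stream's time base, then hand the packet to the
    // muxer. Assumes a single video stream at index 0.
    static void WriteEncodedPacket(AVFormatContext* fmt_ctxt,
                                   AVCodecContext* codec_ctxt,
                                   AVPacket* pkt) {
      AVStream* stream = fmt_ctxt->streams[0];
      av_packet_rescale_ts(pkt, codec_ctxt->time_base, stream->time_base);
      pkt->stream_index = stream->index;
      av_interleaved_write_frame(fmt_ctxt, pkt);
      av_packet_unref(pkt);
    }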

    Finally, in my destructor I close everything up like so:

    FfmpegEncodingFrameSink::~FfmpegEncodingFrameSink() {
     // Pass a nullptr to EncodeAndWriteFrame so it flushes the encoder
     EncodeAndWriteFrame(nullptr);
     // write mp4 trailer
     av_write_trailer(format_ctxt_.get());
    }
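
    For reference, the teardown order in the ffmpeg muxing example is: write the trailer, close the AVIO handle returned by avio_open(), then free the format context; in the code above the last two steps are presumably what the CloseAvFormatContext deleter does. A rough sketch under that assumption (CloseOutput is a hypothetical name):

    // Hypothetical teardown mirroring the ffmpeg muxing example: finish the
    // container, close the file opened with avio_open(), free the context.
    static void CloseOutput(AVFormatContext* fmt_ctxt) {
      av_write_trailer(fmt_ctxt);
      if (!(fmt_ctxt->oformat->flags & AVFMT_NOFILE)) {
        avio_closep(&fmt_ctxt->pb);
      }
      avformat_free_context(fmt_ctxt);
    }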

    If I run this passing n frames to EncodeAndWriteFrame, the line LOG(INFO) << "Writing packet to stream."; gets run n times, indicating that n packets were written to the stream. But ffprobe always shows only n - 1 frames in the video. And the final video doesn’t play in QuickTime.

    What am I doing wrong?

  • How to improve Desktop capture performance and quality with ffmpeg [closed]

    6 November 2024, by Francesco Bramato

    I'm developing a game capture feature for my Electron app. I've been working on this for a while and have tried a lot of different parameter combinations; now I'm running out of ideas :)

    I've read tons of ffmpeg documentation, SO posts and other sites, but I'm not really an ffmpeg expert or a video editing pro.

    This is how it works now:

    The app spawns an ffmpeg command based on the user's settings:
    • Output format (mp4, mkv, avi)
    • Framerate (12, 24, 30, 60)
    • Codec (X264, NVidia NVENC, AMD AMF)
    • Bitrate (from 1000 to 10000 kbps)
    • Presets (for X264)
    • Audio output (a dshow device like StereoMix or VB-Cable) and Audio input (a dshow device like the Microphone)
    • Final Resolution (720p, 1080p, 2K, Original Size)

    The command executed so far is:

    ffmpeg.exe -nostats -hide_banner -hwaccel cuda -hwaccel_output_format cuda -f gdigrab -draw_mouse 0 -framerate 60 -offset_x 0 -offset_y 0 -video_size 2560x1440 -i desktop -f dshow -i audio=@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{D61FA53D-FA37-4BE7-BE2F-4005F94790BB} -ar 44100 -colorspace bt709 -color_trc bt709 -color_primaries bt709 -c:v h264_nvenc -b:v 6000k -preset slow -rc cbr -profile:v high -g 60 -acodec aac -maxrate 6000k -bufsize 12000k -pix_fmt yuv420p -f mpegts -

    One of the settings is the recording mode: full game session or replay buffer. In case of a full game session the output is a file; for the replay buffer it is stdout.

    The output format is mpegts because, as far as I have read in a lot of places, the video stream can be cut at any moment.

    Replays are cut with different past and future durations based on game events.

    In full game session mode, the replays are cut directly from the mpegts.

    In replay buffer mode, the ffmpeg stdout is redirected to the app, which records the buffer (1 or 2 minutes); when a replay must be created, the app saves the relevant buffer section to disk according to the past and future durations and, with another ffmpeg command, copies it to a final mp4 or mkv file.
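
    For illustration, that final copy step can usually be a plain stream copy from the saved mpegts section into the target container, with no re-encoding; roughly along these lines (the file names are hypothetical, not taken from the app):

    ffmpeg -i replay_section.ts -c copy replay.mp4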


    Generally speaking, this works reliably.

    There are a few issues:

    • Even though I ask ffmpeg to capture at 60 fps, the final result is at 30 fps (using -r 60 will speed up the final result)
    • Some users have reported FPS drops in-game, especially when using NVidia NVENC (and having an NVIDIA GPU); using X264 seems to save some FPS
    • Colors are strange compared to the original I see on screen, they seem washed out. I may have solved this using -colorspace bt709 -color_trc bt709 -color_primaries bt709, but I don't know if it is the right choice
    • NVIDIA NVENC with any preset other than slow creates terribly laggy videos

    Here are two examples, 60 FPS, NVIDIA NVENC (slow, 6000 kbps), MP4:

    Recorded by my app: https://www.youtube.com/watch?v=Msm62IwHdlk

    Recorded by OBS with nearly the same settings: https://youtu.be/WuHoLh26W7E

    Hope someone can help me.

    Thanks!

  • Android ICS FFMPEG scale video not working

    20 April 2015, by Android-Developer

    I am using the Guardian Project Android Java FFMPEG library to resize videos. Current code, which is working on Android 5.0.1 / 5.1.0 / 4.4.4:

       File fileTmp = getCacheDir();
       FfmpegController fc = null;
       try {
           fc = new FfmpegController(this, fileTmp);
       } catch (IOException e) {
           e.printStackTrace();
       }
       String path = Environment.getExternalStorageDirectory().getPath() + "/Movies/nexus.mp4";
       final String outPath = Environment.getExternalStorageDirectory().getPath() + "/Movies/test.mp4";
       final Clip out = new Clip(path);
       try {
           if (fc != null) {
               fc.convert(out, outPath, new ShellUtils.ShellCallback() {
                   @Override
                   public void shellOut(String shellLine) {
                       Log.e("", "SHELL OUT: " + shellLine);
                   }

                   @Override
                   public void processComplete(int exitValue) {
                       Log.e("", "PROCESS COMPLETE: " + exitValue);
                   }
               });
           }
       } catch (Exception e) {
           e.printStackTrace();
       }

    Using this code on Android 4.0.4 (Ice Cream Sandwich) doesn’t do anything. While testing on other devices, exitValue in processComplete is always equal to 0, but on ICS it’s 11. Here is the output in LogCat:

    SHELL OUT: /data/data/org.hardartcore.ffmpeg/app_bin/ffmpeg -y -i /mnt/sdcard/Movies/NEXUS.mp4 -ab 160k -r ntsc-film -vf scale=568:320 -strict -2 /mnt/sdcard/Movies/Test.mp4
    PROCESS COMPLETE: 11

    I don’t think it’s something from ffmpeg; it’s more like a problem when the library tries to execute the ffmpeg executable from the raw folder in internal memory, but I can’t see any logs or errors that indicate that either.

    So my question is: is there any mistake in how I am using / running this code, or anything that could prevent the ffmpeg executable from running on old devices with Android ICS?

    Thanks in advance!