Media (29)

Keyword: - Tags -/Music

Other articles (49)

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your Médiaspip installation is at version 0.2 or later. If necessary, contact the administrator of your MédiaSpip to find out.

On other sites (10558)

  • Video size optimization

    21 January 2020, by Heba Gamal Eldin

    I’m working on a task to optimize a video’s size before it is uploaded to the server in a web application,
    so I want my model to automatically optimize the size of each input video.

    I have tried several approaches, such as FFmpeg:
    I used libx265 and h264 (libx264) as codecs; with some videos it increases the size, with others it shrinks it only slightly, and it takes very long to generate the output file.

    For example, with a video of size 8 MB:

    input = {input_name: None}
    output = {output_name: '-n -preset faster -vcodec libx265 -crf 28'}

    The output file is 10 MB.
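The option string in the question appears malformed: `preset faster` is missing its leading dash, and `-n` only tells ffmpeg never to overwrite an existing output. A minimal sketch of a corrected invocation built as a plain argument list (file names are placeholders; whether the result shrinks still depends on the source's existing compression):

```python
import subprocess

def build_ffmpeg_cmd(src, dst, crf=28):
    """Build an ffmpeg command re-encoding with libx265 at a given CRF.

    Higher CRF = stronger compression (smaller file, lower quality).
    """
    return [
        "ffmpeg", "-y",        # overwrite the output if it already exists
        "-i", src,
        "-vcodec", "libx265",
        "-preset", "faster",   # note the dash: '-preset', not 'preset'
        "-crf", str(crf),
        dst,
    ]

cmd = build_ffmpeg_cmd("input.mp4", "output.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually invoke ffmpeg
```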

    I also tried OpenCV, but the output videos aren’t written: the file comes out 0 KB.

    For example, with an input video of resolution 1280×544, which I want to downscale with:

    cap = cv2.VideoCapture(file_name)
    # cap.set(3, 640) / cap.set(4, 480) does not resize frames read from a file;
    # each frame must be resized manually instead
    codec = cv2.VideoWriter_fourcc(*'XVID')  # 'XDIV' is not a valid FourCC
    out = cv2.VideoWriter(output_file, codec, 28.0, (640, 480))
    while cap.isOpened():  # 'While' (capital W) is a syntax error
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (640, 480))  # must match the writer's frame size
        out.write(frame)
    cap.release()
    out.release()  # without release(), the output file can stay empty (0 KB)

    I’m a little confused: which parameters of the input and output videos should I consider in order to optimize them and achieve a meaningful size change for each specific video? Is it only the codec, width, and height?

    What is the most effective approach to do this?

    Should I build a predictive model to estimate suitable output file parameters, or is there a method that adjusts them automatically?

    If there is an illustrative example, please share it.
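As a rule of thumb, the file size is roughly total bitrate × duration, so the parameters worth modeling are the ones that drive bitrate: codec efficiency, CRF, resolution, and frame rate. A back-of-the-envelope sketch of that arithmetic, with made-up numbers:

```python
def estimate_size_mb(video_kbps, audio_kbps, duration_s):
    """Estimate output file size in MB from target bitrates and duration."""
    total_bits = (video_kbps + audio_kbps) * 1000 * duration_s
    return total_bits / 8 / 1_000_000  # bits -> bytes -> MB

# A 60 s clip at 900 kbps video + 128 kbps audio:
size = estimate_size_mb(900, 128, 60)
# (900 + 128) * 1000 * 60 bits = 61,680,000 bits ≈ 7.71 MB
```

With CRF-based encoding the bitrate is an output rather than an input, which is why the same settings shrink some videos and inflate others; targeting an explicit bitrate (or capping CRF output with -maxrate) gives predictable sizes at variable quality.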

  • How to solve javacv ExceptionInInitializerError [closed]

    6 June, by shihui wei

    I want to combine multiple images into a GIF, and I chose ffmpeg. I learned that Java has binary dependencies that bundle ffmpeg, so I chose javacv.

    First, I added the javacv dependency to my pom file:

    <dependency>
        <groupId>org.bytedeco</groupId>
        <artifactId>javacv-platform</artifactId>
        <version>1.5.11</version>
    </dependency>

    Then I wrote code to synthesize a GIF from multiple images.

    public static byte[] encodeGif(List<BufferedImage> frames, int frameDelayMs, boolean loopForever) {

        if (frames == null || frames.isEmpty()) {
            throw new IllegalArgumentException("frames must not be empty");
        }
        int width = frames.get(0).getWidth();
        int height = frames.get(0).getHeight();

        // ByteArrayOutputStream + FFmpegFrameRecorder
        try (ByteArrayOutputStream baos = new ByteArrayOutputStream()) {

            try (FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(baos, width, height)) {
                recorder.setFormat("gif");
                recorder.setFrameRate(1000.0 / frameDelayMs);

                recorder.setOption("loop", loopForever ? "0" : "1");

                recorder.start();

                Java2DFrameConverter converter = new Java2DFrameConverter();
                for (BufferedImage img : frames) {
                    // Convert the BufferedImage to a Frame
                    Frame frame = converter.convert(img);
                    recorder.record(frame);
                }

                recorder.stop();
            }
            return baos.toByteArray();
        } catch (Exception e) {
            log.error("FastGifUtil.encodeGif failed", e);
            throw new RuntimeException("Failed to generate GIF", e);
        }

    }

    Finally, I prepared data for testing. Below is my test code:

    public void testGenerateGif() {
        log.info(">>>>>>>>>>> start get bufferedImage <<<<<<<<<<<<");
        List<BufferedImage> bufferedImages = batchTaskUtils.batchIOTask(urls, url -> {
            byte[] byteData = imageClientUtils.byteData(url);
            return OpencvUtils.byteToBufferedImage(byteData);
        });
        log.info(">>>>>>>>>>> start generate gif <<<<<<<<<<<<");
        long time = System.currentTimeMillis();
        byte[] bytes = GifUtils.encodeGif(bufferedImages, 50, true);
        log.info("{}", System.currentTimeMillis() - time);
        log.info(">>>>>>>>>>> start upload gif <<<<<<<<<<<<");
        String upload = upload(bytes);
        log.info("{}", upload);

    }

    However, I ran into a problem I can’t solve: the FFmpegFrameRecorder class cannot be loaded. The exception is:

    java.lang.ExceptionInInitializerError
        at org.bytedeco.javacv.FFmpegFrameRecorder.<clinit>(FFmpegFrameRecorder.java:356)
        at com.kuaishou.qa.utils.GifUtils.encodeGif(GifUtils.java:29)
        at com.kuaishou.qa.AnimationDiffTest.testGenerateGif(AnimationDiffTest.java:88)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)

  • "Missing reference picture" error when saving rtsp stream with ffmpeg

    4 March 2020, by Cédric Kamermans

    I want to record 10 seconds of video from an IP camera via ffmpeg. The output video looks fine, but I get a bunch of "Missing reference picture" errors in the log. This only happens at the beginning of the process. I also get the warning "circular_buffer_size is not supported on this build".

    I started off with the following command:

    -y -i rtsp://username:password@IP:88/videoMain -t 10 ffmpeg_capture.mp4

    But this resulted in the output being corrupted at the beginning.
    I found the following command on a forum, and it seems to fix that problem; the errors remain, though.

    -y -i rtsp://username:password@IP:88/videoMain -b 900k -vcodec copy -r 60 -t 10 ffmpeg_capture.mp4

    One thing to note is that currently we’re using a C2 V3 IP camera. This model is just for testing, we will upgrade to a better model when we get this working.

    I want to clarify that I’m just beginning to use ffmpeg, so I don’t understand it well yet. It would be greatly appreciated if someone could provide an example of how I can fix this problem.
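"Missing reference picture" at the start of an RTSP capture usually means ffmpeg joined the stream mid-GOP, so the first frames refer to reference frames it never received; over UDP, packet loss produces the same symptom. A hedged sketch: requesting TCP transport while stream-copying (as the second command already does with -vcodec copy) often quiets the start-up errors. The URL mirrors the one in the question; adjust the flags as needed:

```python
import subprocess

def build_capture_cmd(rtsp_url, out_file, seconds=10):
    """Build an ffmpeg command that records an RTSP stream without re-encoding."""
    return [
        "ffmpeg", "-y",
        "-rtsp_transport", "tcp",  # TCP avoids the packet loss UDP can suffer
        "-i", rtsp_url,
        "-t", str(seconds),
        "-c", "copy",              # copy the stream as-is; no re-encode
        out_file,
    ]

cmd = build_capture_cmd("rtsp://username:password@IP:88/videoMain",
                        "ffmpeg_capture.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually record
```

Note that with -c copy, the -b 900k and -r 60 options in the forum command have no effect, since the stream is not re-encoded.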

    Thanks in advance!