Advanced search

Media (0)

Word: - Tags -/signalement

No media matching your criteria is available on this site.

Other articles (66)

  • Media quality after processing

    21 June 2013, by

    Getting the settings of the software that processes the media right matters for striking a balance between the parties involved (the host's bandwidth, the quality of the media for the author and for visitors, accessibility for visitors). How should you set the quality of your media? (A brief sketch follows this list.)
    The higher the media quality, the more bandwidth is used, and visitors on a low-speed internet connection will have to wait longer. Conversely, the lower the quality, the more degraded the media becomes, or even (...)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.
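
    As an illustration of the quality/bandwidth trade-off described in "Media quality after processing" above (a sketch only, assuming the processing tool is ffmpeg, as elsewhere on this page, with hypothetical file names): the encoder's CRF value is the usual quality knob, where lower values mean better quality and more bandwidth.

    # Sketch: two encodes of the same source. The lower CRF yields higher quality and a
    # larger, more bandwidth-hungry file; the higher CRF a smaller but more degraded one.
    ffmpeg -i source.mp4 -c:v libx264 -crf 20 -c:a aac -b:a 128k higher_quality.mp4
    ffmpeg -i source.mp4 -c:v libx264 -crf 30 -c:a aac -b:a 96k lower_quality.mp4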

On other sites (13363)

  • How do I get an InputStream out of a Mono?

    30 October 2022, by Einfari

    Bear with my noobness, I am learning WebFlux. I had this simple application that takes a video and extracts the audio using FFprobe and FFmpeg, so I thought of redoing it reactively, but I am failing miserably...

    


    Controller:

    


    @PostMapping("/upload")
    public String upload(@RequestPart("file") Mono<FilePart> filePartMono, final Model model) {
        Flux<String> filenameList = mediaComponent.extractAudio(filePartMono);
        model.addAttribute("filenameList", new ReactiveDataDriverContextVariable(filenameList));
        return "download";
    }

    Function to get audio streams out of the video:


    public Mono<FFprobeResult> getAudioStreams(InputStream inputStream) {
        try {
            return Mono.just(FFprobe.atPath(FFprobePath)
                    .setShowStreams(true)
                    .setSelectStreams(StreamType.AUDIO)
                    .setLogLevel(LogLevel.INFO)
                    .setInput(inputStream)
                    .execute());
        } catch (JaffreeException e) {
            log.error(e.getMessage(), e);
            throw new MediaException("Audio formats could not be identified.");
        }
    }

    Attempt 1:


    public Flux<String> extractAudio(Mono<FilePart> filePartMono) {
        filePartMono.flatMapMany(Part::content)
                .map(dataBuffer -> dataBuffer.asInputStream(true))
                .flatMap(this::getAudioStreams)
                .subscribe(System.out::println);
        ...
    }

    Attempt 2:


    public Flux<String> extractAudio(Mono<FilePart> filePartMono) {
        filePartMono.flatMapMany(Part::content)
                .reduce(InputStream.nullInputStream(), (inputStream, dataBuffer) -> new SequenceInputStream(
                        inputStream, dataBuffer.asInputStream()
                ))
                .flatMap(this::getAudioStreams)
                .subscribe(System.out::println);
        ...
    }

    Attempt 3:


    public Flux<String> extractAudio(Mono<FilePart> filePartMono) {
        DataBufferUtils.write(filePartMono.flatMapMany(Part::content), OutputStream.nullOutputStream())
                .map(dataBuffer -> dataBuffer.asInputStream(true))
                .flatMap(this::getAudioStreams)
                .subscribe(System.out::println);
        ...
    }

    Attempts 1 and 3 seem to end up the same; FFprobe complains as follows:


    2022-10-30 11:24:30.292  WARN 79049 --- [         StdErr] c.g.k.jaffree.process.BaseStdReader      : [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9162702340] [warning] STSZ atom truncated
    2022-10-30 11:24:30.292 ERROR 79049 --- [         StdErr] c.g.k.jaffree.process.BaseStdReader      : [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9162702340] [error] stream 0, contradictionary STSC and STCO
    2022-10-30 11:24:30.292 ERROR 79049 --- [         StdErr] c.g.k.jaffree.process.BaseStdReader      : [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9162702340] [error] error reading header
    2022-10-30 11:24:30.294 ERROR 79049 --- [         StdErr] c.g.k.jaffree.process.BaseStdReader      : [error] tcp://127.0.0.1:51532: Invalid data found when processing input
    2022-10-30 11:24:30.295  INFO 79049 --- [oundedElastic-3] c.g.k.jaffree.process.ProcessHandler     : Process has finished with status: 1
    2022-10-30 11:24:30.409 ERROR 79049 --- [oundedElastic-3] c.e.s.application.MediaComponent         : Process execution has ended with non-zero status: 1. Check logs for detailed error message.

    Attempt 2 produces multiple of these:


    Exception in thread "Runnable-0" java.lang.StackOverflowError
        at java.base/java.io.SequenceInputStream.read(SequenceInputStream.java:198)

    Could anybody point me in the right direction? What am I doing wrong? By the way, I am printing to the console just to see a result, but in the end I need to take all the resulting streams and pass them as arguments to another function that will finally extract the audio, so I need to figure that out as well.


    Thank you in advance.

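    A possible direction (a sketch, not part of the original question): aggregate the whole upload into a single buffer with DataBufferUtils.join before exposing it as an InputStream, and defer the blocking Jaffree call to a boundedElastic scheduler so it never runs on the event loop. The class name and the ffprobe path below are placeholders; everything else reuses names from the question or standard Spring/Reactor APIs. Note that join buffers the entire file in memory, and that an MP4 whose moov atom sits at the end may still refuse to probe from a non-seekable pipe, in which case writing the upload to a temporary file first is the usual fallback.

    import java.nio.file.Path;

    import com.github.kokorin.jaffree.LogLevel;
    import com.github.kokorin.jaffree.StreamType;
    import com.github.kokorin.jaffree.ffprobe.FFprobe;
    import com.github.kokorin.jaffree.ffprobe.FFprobeResult;
    import org.springframework.core.io.buffer.DataBufferUtils;
    import org.springframework.http.codec.multipart.FilePart;
    import org.springframework.http.codec.multipart.Part;
    import org.springframework.stereotype.Component;
    import reactor.core.publisher.Mono;
    import reactor.core.scheduler.Schedulers;

    @Component
    public class AudioProbe {

        // Placeholder; use whatever FFprobePath already points to in the question.
        private final Path ffprobePath = Path.of("/usr/local/bin/ffprobe");

        public Mono<FFprobeResult> probeAudio(Mono<FilePart> filePartMono) {
            return DataBufferUtils.join(filePartMono.flatMapMany(Part::content)) // all bytes in one DataBuffer
                    .map(buffer -> buffer.asInputStream(true))                   // one InputStream; buffer released on close
                    .flatMap(in -> Mono
                            .fromCallable(() -> FFprobe.atPath(ffprobePath)
                                    .setShowStreams(true)
                                    .setSelectStreams(StreamType.AUDIO)
                                    .setLogLevel(LogLevel.INFO)
                                    .setInput(in)
                                    .execute())                                  // blocking call, deferred to subscribe time
                            .subscribeOn(Schedulers.boundedElastic()));          // keep it off the reactive threads
        }
    }

    The controller can keep returning a ReactiveDataDriverContextVariable; whatever finally extracts the audio can flatMap this Mono and read the probed streams from the FFprobeResult.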

  • FFMPEG watermark + hls stream

    20 November 2023, by Mohamed Taher

    I want to convert an mp4 video into multi-resolution HLS (360, 720) and add a watermark image to the output segments.


    I have tried two approaches, but neither gives me the watermark image together with the multiple scales.

    //one hls with water mark image
    ffmpeg \
    -i input.mp4 \
    -i watermark.png \
    -filter_complex "[0:v][1:v] overlay=10:10:format=auto,format=yuv420p" \
    -filter_complex "[0:v][1:v] overlay=10:10:format=auto,format=yuv420p" \
    -filter_complex "[0:v][1:v] overlay=10:10:format=auto,format=yuv420p" \
    -c:a copy -b:a 128k -c:v libx264 -crf 23 \
    -f hls \
    -hls_time 3 \
    -hls_flags independent_segments \
    -master_pl_name "output.m3u8" "output-%v.m3u8"

    //multi hls with watermark text
    ffmpeg \
    -i ../input.mp4 \
    -map 0:v:0 -map 0:a:0 -map 0:v:0 -map 0:a:0 \
    -c:v:0 libx264 -crf 28 -preset faster -maxrate:v:0 600k -vf "drawtext=fontfile=font.ttf:text='Your Watermark Text':x=10:y=10:fontsize=24:fontcolor=white, scale=-1:360" -b:a:0 64k \
    -c:v:1 libx264 -crf 28 -preset faster -maxrate:v:0 3000k -vf "drawtext=fontfile=font.ttf:text='Your Watermark Text':x=10:y=10:fontsize=24:fontcolor=white, scale=-1:720" -b:a:0 64k \
    -var_stream_map "v:0,a:0,name:360p v:1,a:1,name:460p" \
    -f hls \
    -hls_list_size 0 \
    -threads 0 \
    -hls_playlist_type event \
    -hls_time 3 \
    -hls_key_info_file ../keys/enc.keyinfo \
    -hls_flags independent_segments \
    -master_pl_name "output.m3u8" "output-%v.m3u8"
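
    One approach that usually covers both requirements, sketched below and not taken from the question, is to apply the overlay once in a single -filter_complex, split the watermarked stream, scale each branch, and map the labelled outputs as the HLS variants. The file names and the [v360]/[v720] labels are placeholders, and options from the second command (encryption key file, playlist type, per-variant bitrates) would need to be added back.

    # Sketch: overlay the watermark once, then split into two scaled, labelled branches.
    ffmpeg -i input.mp4 -i watermark.png \
      -filter_complex "[0:v][1:v]overlay=10:10:format=auto,format=yuv420p,split=2[wm0][wm1];[wm0]scale=-2:360[v360];[wm1]scale=-2:720[v720]" \
      -map "[v360]" -map 0:a:0 -map "[v720]" -map 0:a:0 \
      -c:v libx264 -crf 23 -c:a aac -b:a 128k \
      -var_stream_map "v:0,a:0,name:360p v:1,a:1,name:720p" \
      -f hls -hls_time 3 -hls_flags independent_segments \
      -master_pl_name "output.m3u8" "output-%v.m3u8"

    scale=-2:360 keeps the width divisible by two for libx264, and each -map video/audio pair becomes one entry in -var_stream_map.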

  • Realtime removal of carriage return in shell

    1 May 2013, by Seth

    For context, I'm attempting to create a shell script that simplifies the realtime console output of ffmpeg, only displaying the current frame being encoded. My end goal is to use this information in some sort of progress indicator for batch processing.

    For those unfamiliar with ffmpeg's output, it outputs encoded video information to stdout and console information to stderr. Also, when it actually gets to displaying encode information, it uses carriage returns to keep the console screen from filling up. This makes it impossible to simply use grep and awk to capture the appropriate line and frame information.

    The first thing I've tried is replacing the carriage returns using tr:

    $ ffmpeg -i "ScreeningSchedule-1.mov" -y "test.mp4" 2>&1 | tr '\r' '\n'

    This works in that it displays realtime output to the console. However, if I then pipe that information to grep or awk or anything else, tr's output is buffered and is no longer realtime. For example: $ ffmpeg -i "ScreeningSchedule-1.mov" -y "test.mp4" 2>&1 | tr '\r' '\n' >log.txt results in a file that is immediately filled with some information, then 5-10 secs later, more lines get dropped into the log file.

    At first I thought sed would be great for this: $ ffmpeg -i "ScreeningSchedule-1.mov" -y "test.mp4" 2>&1 | sed 's/\r/\n/', but it gets to the line with all the carriage returns and waits until the processing has finished before it attempts to do anything. I assume this is because sed works on a line-by-line basis and needs the whole line to have completed before it does anything else, and then it doesn't replace the carriage returns anyway. I've tried various different regexes for the carriage return and newline, and have yet to find a solution that replaces the carriage return. I'm running OSX 10.6.8, so I am using BSD sed, which might account for that.

    I have also attempted to write the information to a log file and use tail -f to read it back, but I still run into the issue of replacing carriage returns in realtime.

    I have seen that there are solutions for this in python and perl, however, I'm reluctant to go that route immediately. First, I don't know python or perl. Second, I have a completely functional batch processing shell application that I would need to either port or figure out how to integrate with python/perl. Probably not hard, but not what I want to get into unless I absolutely have to. So I'm looking for a shell solution, preferably bash, but any of the OSX shells would be fine.

    And if what I want is simply not doable, well I guess I'll cross that bridge when I get there.
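
    One shell-only direction worth sketching (not from the question, and assuming the awk shipped with OS X, which supports fflush()): instead of translating the carriage returns with tr, let awk treat the carriage return itself as the record separator, so every ffmpeg progress update becomes its own record and can be printed and flushed immediately.

    # Sketch: \r as the record separator keeps the pipeline realtime; fflush() pushes each
    # frame number out of awk's output buffer as soon as it is printed.
    ffmpeg -i "ScreeningSchedule-1.mov" -y "test.mp4" 2>&1 | \
      awk -v RS='\r' -F'[ =]+' '/^frame=/ { print $2; fflush() }'

    The same idea can feed a batch progress indicator: read that pipeline in a while loop and update the display from each emitted frame number.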