Advanced search

Other articles (60)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded as Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text documents are analysed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
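
    MediaSPIP performs these conversions server-side; purely as an illustration of producing the formats listed above (a sketch that shells out to ffmpeg, with placeholder file names, not MediaSPIP's actual pipeline):

    import subprocess

    def encode_web_variants(source):
        """Transcode an upload into the HTML5/Flash-friendly formats mentioned above (illustrative only)."""
        jobs = {
            "out.mp4":  ["-c:v", "libx264",   "-c:a", "aac"],         # MP4: H.264 + AAC
            "out.ogv":  ["-c:v", "libtheora", "-c:a", "libvorbis"],   # Ogg: Theora + Vorbis
            "out.webm": ["-c:v", "libvpx",    "-c:a", "libvorbis"],   # WebM: VP8 + Vorbis
        }
        for target, codec_args in jobs.items():
            subprocess.run(["ffmpeg", "-y", "-i", source, *codec_args, target], check=True)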

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including:
    - critique of existing features and functions
    - articles contributed by developers, administrators, content producers and editors
    - screenshots to illustrate the above
    - translations of existing documentation into other languages
    To contribute, register to the project users’ mailing (...)

  • Authorisations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier(), so that visitors are able to modify their own information on the authors page

On other sites (9801)

  • Short HLS MPEG2 Video Segments Do Not Play

    24 August 2019, by Jon H

    I’ve been attempting to cut a video into small segments (individual words) that can then be rearranged. I can do this with FFmpeg, cutting the video into segments and using the fast concat demuxer to reassemble them, but I am trying to speed the process up.

    I have been doing this by splitting the original video into a short MPEG-2 transport stream (.ts) segment for each word:

    ffmpeg -ss 1 -to 1.5 -i "source.mp4" -c:v libx264 -b:v 1200k -c:a aac -b:a 192k -hls_flags single_file "word.ts"

    I then tried making an m3u8 playlist of these short video segments, but found that only segments of around 2 seconds or longer play at all.

    I then tried using the cat command to join these segments into a single file, which I understand should be possible with MPEG transport streams. However, this did not play all of the segments either.

    To test whether all the segments were present in the concatenated file, I used FFmpeg to convert it back into an MP4 file, and all the segments were there.

    I would appreciate any suggestions on producing the segments, and on concatenating individual segments simply, without FFmpeg. My project isn’t viable if it has to call FFmpeg for every join, but it would work well if I could simply concatenate words together.
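
    For reference, MPEG transport streams are designed so that segments can be joined by straight byte concatenation, which is essentially what cat does. A minimal sketch of the same join done without FFmpeg, assuming all segments were encoded with identical parameters (timestamp discontinuities between segments can still confuse some players); the file names are placeholders:

    from pathlib import Path

    def concat_ts(segment_paths, output_path):
        """Byte-concatenate MPEG-TS segments into a single .ts file (no re-encoding)."""
        with open(output_path, "wb") as out:
            for segment in segment_paths:
                out.write(Path(segment).read_bytes())

    concat_ts(["word1.ts", "word2.ts", "word3.ts"], "sentence.ts")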

  • Is there a faster way to extract various video clips from different sources and concatenate them using moviepy?

    27 August 2019, by user2627082

    So I’ve made a small script using moviepy to help with my video editing process. It scans a bunch of subtitle files for specified words and the time ranges in which they occur. It then extracts those time ranges from the video files corresponding to the subtitle files. The extracted MP4 clips are all concatenated and written out as one big composition.

    So it’s all running fine, but it’s very slow. Can someone tell me whether it’s possible to make it faster? Am I doing something wrong, or is it normal for the process to be this slow?

    import os, re
    from pathlib import Path
    from moviepy.editor import *
    import datetime


    def search(words_list, sub_list):

        for x in range(len(words_list)):
            print(words_list[x])
            clips = []
            clips.clear()
            for y in range(len(sub_list)):
                print(sub_list[y])
                stamps = []
                stamps.clear()

                with open(sub_list[y]) as f:
                    paragraphs = (paragraph.split("\n") for paragraph in
                                  f.read().split("\n\n"))

                for paragraph in paragraphs:
                    if any(words_list[x] in line.lower() for line in paragraph):
                        stamps.append(f"[{paragraph[1].strip()}]")

                videopath = str(sub_list[y]).replace("srt", "mp4").replace(":\\", ":\\\\")
                my_clip = VideoFileClip(videopath)

                for stamp in stamps:
                    print(stamp)
                    pre_stamp = stamp[1:9]
                    post_stamp = stamp[18:26]
                    format = '%H:%M:%S'

                    pre_stamp = str(datetime.datetime.strptime(pre_stamp, format)
                                    - datetime.timedelta(seconds=4))[11:19]
                    post_stamp = str(datetime.datetime.strptime(post_stamp, format)
                                     + datetime.timedelta(seconds=4))[11:19]

                    trim_clip = my_clip.subclip(pre_stamp, post_stamp)
                    clips.append(trim_clip)
                    conc = concatenate_videoclips(clips)
            print(clips)
            conc.write_videofile("C:\\Users\Sri\PycharmProjects\subscrape\movies\\" + words_list[x] + "-comp.mp4")


    words = ["does what", "spins", "size"]

    subs = list(Path('C:\\Users\Sri\PycharmProjects\subscrape\movies').glob('**/*.srt'))

    search(words, subs)
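
    One observation on the code above, offered as a hedged note rather than a definitive diagnosis: concatenate_videoclips(clips) is called inside the innermost loop, so the composite clip is rebuilt for every subclip that is appended, and the final write_videofile call then re-encodes the whole composition. A minimal sketch of the same step with the concatenation deferred until all subclips for a word have been collected (same moviepy API as in the question; whether it helps depends on where the time is really spent):

    from moviepy.editor import concatenate_videoclips

    def write_composition(clips, output_path):
        """Concatenate the collected subclips once, then encode a single output file."""
        if clips:
            final = concatenate_videoclips(clips)   # one concatenation per word, not per subclip
            final.write_videofile(output_path)
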
  • Converting MSB padding to LSB padding

    23 November 2020, by Oier Lauzirika Zarrabeitia

    I am using FFmpeg and Vulkan for a video-related project. The problem is that FFmpeg uses MSB padding for its 9-, 10- and 12-bit pixel formats, whilst Vulkan accepts LSB-padded pixel formats.

    I am using the following code to convert from one to the other, but the performance is terrible (1 second per frame), which is not acceptable for video playback.

    if(paddingBits > 0) {
        // As the padding bits are set to 0, using CPU words for the shifting should not affect the result
        assert(dstBuffers[plane].size() % sizeof(size_t) == 0);
        for(size_t i = 0; i < dstBuffers[plane].size(); i += sizeof(size_t)) {
            reinterpret_cast<size_t&>(dstBuffers[plane][i]) <<= paddingBits;
        }
    }

    Note that this code will be executed 1-4 times per frame (once per plane) and dstBuffers[plane].size() will be on the order of a couple of megabytes. The data in dstBuffers[plane] has an alignment greater than size_t’s, so I am not performing unaligned reads/writes. Using a smaller type such as uint16_t makes it perform worse.

    My question is whether there is any standard function (or one inside the FFmpeg libraries) that implements the same behaviour efficiently enough that performance is not a concern. If not, is there a more efficient way to implement it?

    Edit: dstBuffers[plane] stores std::byte values and, although it is not a std::vector (it is a custom class), its storage is contiguous.
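
    To make the bit layout concrete (an illustration added here, not part of the original question): with MSB padding the zero bits occupy the most significant bits and the sample sits in the low bits, so moving to LSB padding is the left shift by paddingBits that the code above performs. A small NumPy sketch of the same bulk shift, useful for sanity-checking a plane's expected output, assuming little-endian 16-bit samples; NumPy is an assumption and this is not a substitute for the C++ loop:

    import numpy as np

    def msb_to_lsb(plane_bytes, padding_bits):
        """Shift every little-endian 16-bit sample left so the padding moves to the low bits."""
        samples = np.frombuffer(plane_bytes, dtype='<u2')
        return (samples << padding_bits).astype('<u2').tobytes()

    # A 10-bit sample 0x03FF with 6 padding bits: sample in the low bits (MSB-padded)
    # becomes 0xFFC0, i.e. sample in the high bits with zero padding below (LSB-padded).
    shifted = msb_to_lsb((0x03FF).to_bytes(2, 'little'), 6)
    print(hex(int.from_bytes(shifted, 'little')))   # 0xffc0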