
Media (91)

Other articles (40)

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct steps.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are carried out in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (5597)

  • How to combine/concatenate videos stored in AWS S3 bucket based on title of the file name

    9 June 2020, by orangecube

    I am using a service that allows me to record videos that get automatically pushed to a folder (submissions) in an S3 bucket. There are multiple videos; however, they need to be grouped together and concatenated so that the output is one video per group.

    So, basically, any tips on how I can take videos based on their titles and stitch them together?

    Example:

    The submissions folder will have:

    a-100-2.mp4
a-200-6.mp4
b-123-5.mp4

    Expected output in the processed folder:

    a.mp4     - (both 'a' videos get stitched together)
b.mp4     - (only 'b' gets sent over since there is only one video.)

    Thanks in advance!

    Edit: some additional, more detailed information below, in case it helps.

    The files will be labeled as:
name-location-video_token-stream_token.mp4

    I need help creating a script or process that will concatenate the videos using the procedure outlined below; a rough sketch follows the example at the end.

    Processing rules (back end):

    1. Check whether videos in the 'submissions' folder share the same video_token. If so, keep the newest one and delete the old ones.

    2. Take all videos in the 'submissions' folder with the same name and location in the title and concatenate them. Save the output video to a new folder in the bucket, using the location as the folder name.
Output file name:
name-location-year.mp4

    EXAMPLE:

    Submissions folder:
joey-toronto-001-354.mp4
joey-toronto-001-241.mp4 - this will be deleted
joey-toronto-103-452.mp4
alex-montreal-352-232.mp4
alex-montreal-452-223.mp4

    Resulting output:

    Toronto folder:
Joey-toronto-2020.mp4

    Montreal folder:
Alex-montreal-2020.mp4
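
    A minimal sketch of one way to implement the rules above, assuming Python with boto3, ffmpeg on the PATH, the four-field name-location-video_token-stream_token.mp4 naming, and placeholder bucket/prefix/year values (my-bucket, submissions/, 2020). It is not the service's own tooling, just an illustration:

# Hypothetical sketch: group videos under an S3 "submissions/" prefix by
# name and location, keep only the newest file per video_token, concatenate
# each group with ffmpeg, and upload the result to a folder named after the
# location. Bucket, prefix and year below are placeholders.
import os
import subprocess
import tempfile
from collections import defaultdict

import boto3

BUCKET = "my-bucket"      # placeholder
PREFIX = "submissions/"   # placeholder
YEAR = "2020"             # placeholder

s3 = boto3.client("s3")

def list_submissions():
    """Yield (key, last_modified) for every .mp4 under the submissions prefix."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            if obj["Key"].endswith(".mp4"):
                yield obj["Key"], obj["LastModified"]

def group_keys():
    """Group keys by (name, location), keeping only the newest key per video_token."""
    newest = {}  # (name, location, video_token) -> (last_modified, key)
    for key, modified in list_submissions():
        base = os.path.basename(key)[:-len(".mp4")]
        # assumes exactly four hyphen-separated fields in the file name
        name, location, video_token, _stream_token = base.split("-")
        ident = (name, location, video_token)
        if ident not in newest or modified > newest[ident][0]:
            newest[ident] = (modified, key)
    groups = defaultdict(list)
    for (name, location, _token), (_modified, key) in newest.items():
        groups[(name, location)].append(key)
    return groups

def concat_group(name, location, keys):
    """Download one group, concatenate with the ffmpeg concat demuxer, upload."""
    with tempfile.TemporaryDirectory() as tmp:
        local_files = []
        for key in sorted(keys):
            local = os.path.join(tmp, os.path.basename(key))
            s3.download_file(BUCKET, key, local)
            local_files.append(local)
        list_path = os.path.join(tmp, "inputs.txt")
        with open(list_path, "w") as f:
            for path in local_files:
                f.write(f"file '{path}'\n")
        output = os.path.join(tmp, f"{name}-{location}-{YEAR}.mp4")
        # "-c copy" only works if all clips share codecs/parameters;
        # otherwise drop it and let ffmpeg re-encode.
        subprocess.run(
            ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
             "-i", list_path, "-c", "copy", output],
            check=True,
        )
        s3.upload_file(output, BUCKET, f"{location}/{os.path.basename(output)}")

if __name__ == "__main__":
    for (name, location), keys in group_keys().items():
        concat_group(name, location, keys)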


  • Why doesn't PyAudio read 'mp3'?

    22 October 2020, by freshITmeat

    I tried to read a file, giving its absolute path.
When I run my code, the first thing I see is this message:

    D:\prog\datascience\anaconda\lib\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
  warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)


    I tried this:

    PATH_TO_FFMPEG = 'D:\\prog\\ffmpeg-win-2.2.2\\ffmpeg.exe'
pydub.AudioSegment.converter = r'D:\\prog\\ffmpeg-win-2.2.2\\ffmpeg.exe'


    I also installed ffmpeg separately with pip, but it didn't help.
When I try this:

    raw_sound = pydub.AudioSegment.from_mp3(file=track_path)

    where track_path is a correct absolute path generated automatically,
I then get this error:

Traceback (most recent call last):
  File "D:\prog\PyCharm Community Edition 2020.2.2\plugins\python-ce\helpers\pydev\pydevd.py", line 1448, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "D:\prog\PyCharm Community Edition 2020.2.2\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "D:/testtask2/test_task/testtask/get_mffc.py", line 165, in <module>
    slice_all_in_a_dir('May 27 2020 LNC/Hydrophone 1/raw_records')
  File "D:/testtask2/test_task/testtask/get_mffc.py", line 70, in slice_all_in_a_dir
    slice_samples(track_path= [file],
  File "D:/testtask2/test_task/testtask/get_mffc.py", line 48, in slice_samples
    raw_sound = pydub.AudioSegment.from_mp3(file=track_path)
  File "D:\prog\datascience\anaconda\lib\site-packages\pydub\audio_segment.py", line 738, in from_mp3
    return cls.from_file(file, 'mp3', parameters=parameters)
  File "D:\prog\datascience\anaconda\lib\site-packages\pydub\audio_segment.py", line 680, in from_file
    stdin_data = file.read()
AttributeError: 'list' object has no attribute 'read'
python-BaseException

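    A minimal sketch of the usual fix, not the asker's code: point pydub at the ffmpeg binary and pass a single path (not a list) to AudioSegment.from_mp3(); passing a list is what triggers "'list' object has no attribute 'read'". The ffmpeg path and the mp3 path below are placeholders.

# Hypothetical sketch under the assumptions stated above.
import pydub

# Plain string path to ffmpeg.exe (avoid combining an r-prefix with doubled backslashes):
pydub.AudioSegment.converter = "D:\\prog\\ffmpeg-win-2.2.2\\ffmpeg.exe"

track_paths = ["May 27 2020 LNC/Hydrophone 1/raw_records/example.mp3"]  # placeholder list

for track_path in track_paths:
    # from_mp3() expects one path or file object per call, not a list
    raw_sound = pydub.AudioSegment.from_mp3(track_path)
    print(track_path, len(raw_sound), "ms")  # AudioSegment length is in milliseconds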

  • Can't overlay a transparent WebM video on top of an MP4 using ffmpeg [closed]

    27 October 2020, by Hervé

    I have been struggling for hours trying to overlay a transparent WebM video on top of an MP4 using ffmpeg. The two videos have the same duration, so it should be easy, but I am getting desperate.

    "c:\Program Files\ffmpeg\bin\ffmpeg.exe" ^
-i drone.mp4 ^
-i trans.webm -pix_fmt yuv444p  ^
-filter_complex "[1:v]format=rgba,colorchannelmixer=aa=0.5[trans];[0:v][trans]overlay=10:10" ^
out.mp4


    I tried MANY different options and formats but kept getting the same outcome: ffmpeg simply ignores the alpha channel of the trans.webm file and treats its background as black. I used colorchannelmixer=aa=0.5 and the 10:10 offset only to make the problem visible. My goal is really to keep the original subtle alpha channel of trans.webm, NOT to make the black colour transparent.


    Some information about the two files:


    drone.mp4


    Metadata:
    creation_time   : 2020-10-27T10:18:38.000000Z
    handler_name    : VideoHandler
    encoder         : h264
    Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 191 kb/s (default)
    Metadata:
    creation_time   : 2020-10-27T10:18:38.000000Z
    handler_name    : SoundHandler


    trans.webm


    Input #0, matroska,webm, from 'trans.webm':
    Metadata:
    ENCODER         : Lavf58.51.100
    Duration: 00:00:27.00, start: 0.000000, bitrate: 28 kb/s
    Stream #0:0: Video: vp9 (Profile 0), yuv420p(tv), 1920x1080, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 1k tbn, 1k tbc (default)
    Metadata:
    alpha_mode      : 1
    ENCODER         : Lavc58.100.100 libvpx-vp9
    DURATION        : 00:00:27.000000000

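    A minimal sketch of the usual workaround, not a confirmed answer: ffmpeg's built-in vp9 decoder discards the alpha plane, so forcing the libvpx-vp9 decoder for the WebM input (as an input option, before -i trans.webm) is what normally preserves transparency for the overlay. Shown here as a small Python wrapper around a command like the asker's (the -pix_fmt experiment is dropped); paths and file names are the asker's.

# Hypothetical sketch: same overlay filter, but with "-c:v libvpx-vp9" placed
# before "-i trans.webm" so the WebM's alpha channel is actually decoded.
import subprocess

cmd = [
    r"c:\Program Files\ffmpeg\bin\ffmpeg.exe",
    "-i", "drone.mp4",
    "-c:v", "libvpx-vp9",   # input option for the next -i: decode VP9 alpha via libvpx
    "-i", "trans.webm",
    "-filter_complex",
    "[1:v]format=rgba,colorchannelmixer=aa=0.5[trans];[0:v][trans]overlay=10:10",
    "out.mp4",
]
subprocess.run(cmd, check=True)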