Advanced search

Media (0)

Word: - Tags -/flash

No media matching your criteria is available on the site.

Other articles (27)

  • Support de tous types de médias

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses the HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player was created specifically for MediaSPIP and can easily be adapted to fit a given theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows media playback on the major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash fallback is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

On other sites (7112)

  • Broken pipe error when writing video file with moviepy in azure [closed]

    6 June 2024, by Lydia

    I have a program that retrieves images (PNG) and audio files from Azure Blob Storage and merges them into a video, which is then written to a temporary file and saved back to Blob Storage. I'm coding in Python, and here is the code I use to do that:

    import datetime
    import logging
    import tempfile

    from moviepy.editor import AudioFileClip, ImageClip, concatenate_videoclips

    # get_container_client, get_blob, get_blob_client, AZURE_CONTAINER_NAME and
    # FINAL_VIDEO are helpers/constants defined elsewhere in the application.

    def merge_image_audio(azure_connection_string):
        """Merge PNG with MP3 files."""
        # Get the Azure container client
        container_client = get_container_client(azure_connection_string, AZURE_CONTAINER_NAME)

        # List blobs in the "temp" folder
        blob_list = container_client.list_blobs(name_starts_with="temp/")

        # Download PNG and MP3 files and store the blob data in two lists
        image_blob_data_list = []
        audio_blob_data_list = []
        for blob in blob_list:
            if blob.name.endswith('.png'):
                image_blob_data_list.append(get_blob(azure_connection_string, blob, '.png'))
            elif blob.name.endswith('.mp3'):
                audio_blob_data_list.append(get_blob(azure_connection_string, blob, '.mp3'))

        # Build one clip per image/audio pair, each image shown for the
        # duration of its audio track
        clips = []
        for image, audio in zip(image_blob_data_list, audio_blob_data_list):
            audio_clip = AudioFileClip(audio)
            image_clip = ImageClip(image).set_duration(audio_clip.duration)
            image_clip = image_clip.set_audio(audio_clip)
            clips.append(image_clip)

        # Concatenate all clips into the final video
        final_clip = concatenate_videoclips(clips)

        # Write the video to a temporary file
        with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as video_temp_file:
            try:
                temp_file_name = video_temp_file.name
                final_clip.write_videofile(temp_file_name, fps=24, codec='libx264', audio_codec='mp3')
            except OSError as e:
                logging.error(f"Failed: {e}", exc_info=True)

        # Upload the finished video back to Blob Storage
        current_datetime = datetime.datetime.now().strftime("%Y-%m-%d_%H:%M")
        final_video_name = current_datetime + FINAL_VIDEO
        tmp_blob_client = get_blob_client(azure_connection_string, AZURE_CONTAINER_NAME, final_video_name)
        with open(temp_file_name, 'rb') as video_data:
            tmp_blob_client.upload_blob(video_data, overwrite=True)

    I have containerized my code, and the Docker image works perfectly on my machine. However, once deployed on Azure, I encounter this problem when writing the video:

    Failed: [Errno 32] Broken pipe
    MoviePy error: FFMPEG encountered the following error while writing file /tmp/tmp81o22bka.mp4: b''

    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/site-packages/moviepy/video/io/ffmpeg_writer.py", line 136, in write_frame
        self.proc.stdin.write(img_array.tobytes())
    BrokenPipeError: [Errno 32] Broken pipe

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/site/wwwroot/function_app.py", line 102, in generate_simple_video
        image_clip.write_videofile(temp_file_name, fps=24)
      File "", line 2, in write_videofile
      File "/usr/local/lib/python3.8/site-packages/moviepy/decorators.py", line 54, in requires_duration
        return f(clip, *a, **k)
      File "", line 2, in write_videofile
      File "/usr/local/lib/python3.8/site-packages/moviepy/decorators.py", line 135, in use_clip_fps_by_default
        return f(clip, *new_a, **new_kw)
      File "", line 2, in write_videofile
      File "/usr/local/lib/python3.8/site-packages/moviepy/decorators.py", line 22, in convert_masks_to_RGB
        return f(clip, *a, **k)
      File "/usr/local/lib/python3.8/site-packages/moviepy/video/VideoClip.py", line 300, in write_videofile
        ffmpeg_write_video(self, filename, fps, codec,
      File "/usr/local/lib/python3.8/site-packages/moviepy/video/io/ffmpeg_writer.py", line 228, in ffmpeg_write_video
        writer.write_frame(frame)
      File "/usr/local/lib/python3.8/site-packages/moviepy/video/io/ffmpeg_writer.py", line 180, in write_frame
        raise IOError(error)
    OSError: [Errno 32] Broken pipe
    MoviePy error: FFMPEG encountered the following error while writing file /tmp/tmp81o22bka.mp4: b''
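    For context on the failure mode: MoviePy's writer pushes raw frames into an ffmpeg subprocess through a pipe, so a broken pipe means ffmpeg itself exited before all frames were written (the empty b'' is the stderr MoviePy managed to capture). A minimal sketch of the same symptom, with a trivial child process standing in for ffmpeg (no MoviePy required):

```python
import subprocess
import sys

# Start a child that exits immediately, standing in for an ffmpeg process
# that died mid-encode (bad arguments, out of memory, full /tmp, ...).
proc = subprocess.Popen([sys.executable, "-c", "pass"],
                        stdin=subprocess.PIPE, bufsize=0)
proc.wait()  # child is gone; the read end of the pipe is now closed

caught_errno = None
try:
    proc.stdin.write(b"fake frame bytes")  # what MoviePy's write_frame does
except BrokenPipeError as e:
    caught_errno = e.errno
proc.stdin.close()

print(caught_errno)  # 32, the same errno as in the traceback above
```

    So the broken pipe is a symptom, not the cause; the real question is why ffmpeg exits on Azure but not locally.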

    From my online research, everyone suggests that it’s a resource issue (lack of RAM and CPU). I increased these resources in the Azure Function App configuration, but I still face the same problem.
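    Besides RAM and CPU, disk space where the temporary .mp4 is written is worth ruling out, since an encode that outgrows it makes ffmpeg die with exactly this kind of unhelpful error. A quick check that can be logged from inside the function (pure standard library; the path is whatever NamedTemporaryFile uses):

```python
import shutil
import tempfile

# Report how much space is left in the directory NamedTemporaryFile writes to.
tmp_dir = tempfile.gettempdir()
total, used, free = shutil.disk_usage(tmp_dir)
free_mb = free // (1024 * 1024)
print(f"{tmp_dir}: {free_mb} MB free of {total // (1024 * 1024)} MB")
```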

    I took a step-by-step approach to check for compatibility issues with the MoviePy function. I created a small 30-second video without audio, and it worked. Then, I added more options such as more images, audio, etc., but it failed.

    I suspected a timeout issue, since an Azure Function App has a 5-minute timeout that can be raised to 10 minutes on the consumption plan, but it still fails even when the execution time is only one minute.
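    One way to rule the timeout in or out for good is to log wall-clock time around the encode itself; a minimal timing helper (pure Python; `write_video` below is a hypothetical stand-in for the real `write_videofile` call):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed wall-clock seconds) for logging."""
    start = time.monotonic()
    result = fn(*args, **kwargs)
    return result, time.monotonic() - start

# Hypothetical stand-in for final_clip.write_videofile(...)
def write_video():
    time.sleep(0.05)
    return "ok"

result, elapsed = timed(write_video)
print(f"encode finished in {elapsed:.2f}s -> {result}")
```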

    I am out of ideas to test and really need help.

    Thank you in advance.

  • aacdec_usac : rename spectrum decode function and remove unused arg

    12 June 2024, by Lynne
    aacdec_usac : rename spectrum decode function and remove unused arg
    

    The LC part of the decoder combines scalefactor application with
    spectrum decoding, and this was the plan here, but that's not possible,
    so change the function name.

    • [DH] libavcodec/aac/aacdec_usac.c
  • Streaming playlist with browser overlay [closed]

    28 June 2024, by Tchoune

    Do you have any idea how I could stream a video playlist to Twitch (with ffmpeg or another library) while overlaying a web page on top (with Twitch sub alerts, for example)?

    I should also mention that my system needs to be multi-user: a user can stream to 1 to n different Twitch channels (multiple instances).

    For production, I plan to use a Linux server without a GUI. I've been looking for a solution for 4 months, but I've run out of ideas.

    I've already tried Xvfb to create a virtual desktop and display a Chromium browser, but it isn't workable for production. I've also tried the whole Puppeteer route, but it isn't usable either.

    My backend server runs Node.js with AdonisJS. I'm currently using ffmpeg to broadcast a video playlist with m3u8:

    startStream(): number {
let parameters = [
  '-nostdin',
  '-re',
  '-f',
  'concat',
  '-safe',
  '0',
  '-vsync',
  'cfr',
  '-i',
  `concat:${app.publicPath(this.timelinePath)}`,
]

let filterComplex = ''

if (this.logo) {
  parameters.push('-i', app.publicPath(this.logo))
  filterComplex += '[1:v]scale=200:-1[logo];[0:v][logo]overlay=W-w-5:5[main];'
} else {
  filterComplex += '[0:v]'
}

if (this.overlay) {
  parameters.push('-i', app.publicPath(this.overlay))
  filterComplex += '[2:v]scale=-1:ih[overlay];[main][overlay]overlay=0:H-h[main];'
}

filterComplex += `[main]drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:textfile=${app.publicPath(this.guestFile)}:reload=1:x=(w-text_w)/2:y=h-text_h-10:fontsize=18:fontcolor=white[main]; [main]drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:text='%{localtime\\:%X}':x=10:y=h-text_h-10:fontsize=16:fontcolor=white`

parameters.push(
  '-filter_complex',
  filterComplex,
  '-copyts',
  '-pix_fmt',
  'yuv420p',
  '-s',
  '1920x1080',
  '-c:v',
  'libx264',
  '-profile:v',
  'high',
  '-preset',
  'veryfast',
  '-b:v',
  '6000k',
  '-maxrate',
  '7000k',
  '-minrate',
  '5000k',
  '-bufsize',
  '9000k',
  '-g',
  '120',
  '-r',
  '60',
  '-c:a',
  'aac',
  '-f',
  'flv',
  `${this.baseUrl}/${encryption.decrypt(this.streamKey)}`
)

this.instance = spawn('ffmpeg', parameters, {
  detached: true,
  stdio: ['ignore', 'pipe', 'pipe'],
})

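    Whatever the root cause turns out to be, capturing ffmpeg's stderr from the spawned process (the `stdio` option above already pipes it, but nothing reads it) is the fastest way to see why a stream drops. The capture pattern, sketched in Python with a stand-in child that writes a diagnostic to stderr (substitute the real ffmpeg command line; the same idea applies to the Node `spawn` call above via `instance.stderr.on('data', ...)`):

```python
import subprocess
import sys

# Stand-in for the ffmpeg invocation: a child that emits a diagnostic
# on stderr, which we capture instead of letting it vanish.
proc = subprocess.run(
    [sys.executable, "-c", "import sys; sys.stderr.write('ffmpeg-style error')"],
    capture_output=True, text=True,
)
stderr_text = proc.stderr
print(stderr_text)  # log this alongside the stream's lifecycle events
```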
    I've thought of using WebRTC, but it doesn't seem to meet my needs.

    I know that GStreamer has wpewebkit/wpesrc for this, but there's no Node.js wrapper and, above all, it doesn't accept playlist input (m3u8 or txt)...

    If anyone has any new ideas, I'd be very grateful.