Advanced search

Media (91)

Other articles (108)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (6893)

  • Broken pipe error when writing video file with moviepy in Azure [closed]

    6 June 2024, by Lydia

    I have a program that retrieves images (PNG) and audio files from Azure Blob Storage and merges them into a video, which is then written to a temporary file and saved back to Blob Storage. I'm coding in Python, and here is the code I use to do that:

    import datetime
    import logging
    import tempfile

    from moviepy.editor import AudioFileClip, ImageClip, concatenate_videoclips

    # get_container_client, get_blob, get_blob_client and the constants
    # AZURE_CONTAINER_NAME / FINAL_VIDEO are defined elsewhere in the application.

    def merge_image_audio(azure_connection_string):
        """Merge PNG images with MP3 files into one video and upload it."""
        # Get the Azure container client
        container_client = get_container_client(azure_connection_string, AZURE_CONTAINER_NAME)

        # List blobs in the "temp" folder
        blob_list = container_client.list_blobs(name_starts_with="temp/")

        # Download PNG and MP3 files and store the blob data in lists
        image_blob_data_list = []
        audio_blob_data_list = []
        for blob in blob_list:
            if blob.name.endswith('.png'):
                image_blob_data_list.append(get_blob(azure_connection_string, blob, '.png'))
            elif blob.name.endswith('.mp3'):
                audio_blob_data_list.append(get_blob(azure_connection_string, blob, '.mp3'))

        # Build one clip per image, lasting as long as its audio track
        clips = []
        for image, audio in zip(image_blob_data_list, audio_blob_data_list):
            audio_clip = AudioFileClip(audio)
            image_clip = ImageClip(image).set_duration(audio_clip.duration).set_audio(audio_clip)
            clips.append(image_clip)

        # Concatenate all clips and write the result to a temporary MP4 file
        final_clip = concatenate_videoclips(clips)
        with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as video_temp_file:
            try:
                temp_file_name = video_temp_file.name
                final_clip.write_videofile(temp_file_name, fps=24, codec='libx264', audio_codec='mp3')
            except OSError as e:
                logging.error(f"Failed: {e}", exc_info=True)

        # Upload the temporary file back to Blob Storage
        current_datetime = datetime.datetime.now().strftime("%Y-%m-%d_%H:%M")
        final_video_name = current_datetime + FINAL_VIDEO
        tmp_blob_client = get_blob_client(azure_connection_string, AZURE_CONTAINER_NAME, final_video_name)
        with open(temp_file_name, 'rb') as video_data:
            tmp_blob_client.upload_blob(video_data, overwrite=True)

    I have containerized my code, and the Docker image works perfectly on my machine. However, once deployed on Azure, I encounter this problem when writing the video:

    Failed: [Errno 32] Broken pipe

    MoviePy error: FFMPEG encountered the following error while writing file /tmp/tmp81o22bka.mp4: b''

    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/site-packages/moviepy/video/io/ffmpeg_writer.py", line 136, in write_frame
        self.proc.stdin.write(img_array.tobytes())
    BrokenPipeError: [Errno 32] Broken pipe

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/site/wwwroot/function_app.py", line 102, in generate_simple_video
        image_clip.write_videofile(temp_file_name, fps=24)
      File "", line 2, in write_videofile
      File "/usr/local/lib/python3.8/site-packages/moviepy/decorators.py", line 54, in requires_duration
        return f(clip, *a, **k)
      File "", line 2, in write_videofile
      File "/usr/local/lib/python3.8/site-packages/moviepy/decorators.py", line 135, in use_clip_fps_by_default
        return f(clip, *new_a, **new_kw)
      File "", line 2, in write_videofile
      File "/usr/local/lib/python3.8/site-packages/moviepy/decorators.py", line 22, in convert_masks_to_RGB
        return f(clip, *a, **k)
      File "/usr/local/lib/python3.8/site-packages/moviepy/video/VideoClip.py", line 300, in write_videofile
        ffmpeg_write_video(self, filename, fps, codec,
      File "/usr/local/lib/python3.8/site-packages/moviepy/video/io/ffmpeg_writer.py", line 228, in ffmpeg_write_video
        writer.write_frame(frame)
      File "/usr/local/lib/python3.8/site-packages/moviepy/video/io/ffmpeg_writer.py", line 180, in write_frame
        raise IOError(error)
    OSError: [Errno 32] Broken pipe

    MoviePy error: FFMPEG encountered the following error while writing file /tmp/tmp81o22bka.mp4: b''

    From my online research, everyone suggests that it’s a resource issue (lack of RAM and CPU). I increased these resources in the Azure Function App configuration, but I still face the same problem.
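
    One more thing worth ruling out, beyond RAM and CPU, is the FFmpeg binary and the codecs available inside the container. The sketch below is not part of the original code; it assumes MoviePy 1.x with the bundled imageio-ffmpeg, logs which ffmpeg executable is actually invoked, and keeps FFmpeg's own log file so the empty b'' error message is replaced by FFmpeg's real stderr. Switching to audio_codec='aac' and pinning temp_audiofile to /tmp are assumptions to test, not confirmed fixes.

    # Hedged diagnostic sketch (assumes MoviePy 1.x + imageio-ffmpeg); adapt paths and codecs as needed.
    import logging

    import imageio_ffmpeg

    def write_with_diagnostics(final_clip, temp_file_name):
        # Log which ffmpeg binary MoviePy will invoke inside the container.
        logging.info("ffmpeg binary: %s", imageio_ffmpeg.get_ffmpeg_exe())
        logging.info("ffmpeg version: %s", imageio_ffmpeg.get_ffmpeg_version())

        final_clip.write_videofile(
            temp_file_name,
            fps=24,
            codec="libx264",
            audio_codec="aac",                     # assumption: AAC instead of MP3 for an MP4 container
            temp_audiofile="/tmp/temp_audio.m4a",  # keep the intermediate audio file in a writable directory
            threads=2,                             # modest thread count for small Function App instances
            write_logfile=True,                    # writes .log files next to the output with FFmpeg's stderr
        )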

    I took a step-by-step approach to check for compatibility issues with the MoviePy function. I created a small 30-second video without audio, and it worked. Then, I added more options such as more images, audio, etc., but it failed.

    I suspected a timeout issue, since an Azure Function App has a 5-minute timeout that can be increased to 10 minutes on the Consumption plan, but it still fails even with an execution time of only one minute.

    I am out of ideas to test and really need help.

    Thank you in advance.

  • How to improve Video Trimming & Merging Performance for Mobile App [closed]

    5 May 2024, by Harpreet Singh.8052

    I am in the process of developing an app that will allow users to trim and merge recorded videos, similar to Snapchat and Instagram stories. Currently, I am using FFmpeg for video processing, but it is taking too long to complete. My idea for minimizing processing time is to use ExoPlayer: store the trimmed range of each video and only play that part when the video is played. ExoPlayer can play multiple videos seamlessly, so it will appear as if the videos have been trimmed and merged. However, I am uncertain about how to handle playback on the server side once a user uploads all the videos along with metadata describing which part of each video should be played. If I follow this approach, I would first need to download all the videos to the user's device before playback, but the videos can be long, ranging from 10 to 30 minutes, which makes this approach impractical. I would like to play them in a streaming manner.
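
    As an illustration only (the field names below are hypothetical, not part of any defined API), the trim metadata uploaded alongside the raw videos could be a small JSON document listing, for each source, which window should be played; any client can then rebuild the same sequence without the files ever being re-encoded:

    # Hypothetical trim-metadata layout; field names are illustrative only.
    import json

    trim_metadata = {
        "story_id": "example-story",  # assumed identifier, for illustration
        "clips": [
            {"source": "video_01.mp4", "start_sec": 12.0, "end_sec": 27.5},
            {"source": "video_02.mp4", "start_sec": 0.0, "end_sec": 9.3},
        ],
    }

    # Uploaded as a sidecar JSON file next to the videos; a player that supports
    # clipped playback maps each entry to one clipped media item in its playlist.
    print(json.dumps(trim_metadata, indent=2))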

    I am also curious about how other video editing apps such as CapCut or Kinemaster handle video processing tasks like trimming, merging, and slow motion. What video processing tools do they use? When I tried merging ten one-minute videos, it took around 10 minutes on my Android phone, yet these editing apps take less time.
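
    One likely reason dedicated editors feel faster is that plain trims and joins can often be done without re-encoding at all. As a hedged sketch (it assumes the ffmpeg CLI is available and that all inputs share the same codec, resolution and frame rate; it is not a description of how CapCut or Kinemaster actually work internally), a keyframe-aligned trim plus a concat-demuxer merge with stream copy avoids re-encoding entirely:

    # Hedged sketch: trim and merge with FFmpeg stream copy (no re-encoding).
    # Cuts snap to keyframes, so this is fast but not frame-accurate.
    import subprocess
    import tempfile

    def trim_copy(src, start, end, dst):
        """Copy the [start, end] window (in seconds) of src into dst without re-encoding."""
        subprocess.run(
            ["ffmpeg", "-y",
             "-ss", str(start),       # seek on the input (lands on the nearest keyframe)
             "-i", src,
             "-t", str(end - start),  # keep this much of the stream
             "-c", "copy",            # copy packets instead of re-encoding
             dst],
            check=True,
        )

    def concat_copy(parts, dst):
        """Join already-trimmed parts with the concat demuxer, still without re-encoding."""
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as listing:
            for part in parts:
                listing.write(f"file '{part}'\n")
            list_path = listing.name
        subprocess.run(
            ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
             "-i", list_path, "-c", "copy", dst],
            check=True,
        )

    When frame-accurate cuts or effects are needed, a common compromise is to re-encode only the short segments around the cut points (ideally with the device's hardware encoder) and stream-copy everything in between.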

    I would appreciate any approach or idea to improve the processing time for video trimming and merging.