
Media (2)
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
Other articles (70)
-
The user profile
12 April 2011
Each user has a profile page allowing them to edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a navigation link, "Modifier votre profil" (edit your profile), is (...) -
Configuring language support
15 November 2010
Accessing the configuration and adding supported languages
To enable support for new languages, go to the "Administer" section of the site.
From there, in the navigation menu, you can access a "Language management" section that lets you enable support for new languages.
Each newly added language can still be disabled as long as no object has been created in that language. Once one has, it becomes greyed out in the configuration and (...) -
The farm's regular Cron tasks
1 December 2010
Managing the farm relies on running several repetitive tasks, called Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of all the instances in the shared farm on a regular basis. Combined with a system Cron on the farm's central site, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
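The super-Cron idea described above can be sketched as a small script, with the caveat that the instance hosts and the `spip.php?action=cron` endpoint used here are assumptions for illustration, not MediaSPIP's confirmed internals:

```python
import urllib.request

# Hypothetical list of mutualized instances; a real farm would supply its own hosts.
INSTANCES = ["site-a.example.org", "site-b.example.org"]

def cron_urls(hosts):
    """Build a background-task URL per instance (endpoint name is an assumption)."""
    return [f"https://{host}/spip.php?action=cron" for host in hosts]

def ping_instances(hosts):
    """Request each instance's cron URL, ignoring failures of individual sites."""
    for url in cron_urls(hosts):
        try:
            urllib.request.urlopen(url, timeout=5).read(0)
        except OSError:
            pass  # an unreachable site must not block the others

# A system cron entry on the central site would run ping_instances(INSTANCES)
# every minute, which is what generates the "regular visits" mentioned above.
```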
On other sites (6320)
-
Broken pipe error when writing video file with moviepy in azure [closed]
6 June 2024, by Lydia
I have a program that retrieves images (PNG) and audio files from Azure Blob Storage and merges them into a video, which is then written to a temporary file and saved back to Blob Storage. I'm coding in Python, and here is the code I use to do that:


import datetime
import logging
import tempfile

from moviepy.editor import AudioFileClip, ImageClip, concatenate_videoclips


def merge_image_audio(azure_connection_string):
    """Merge PNG with mp3 files."""
    # Get Azure Container Client
    container_client = get_container_client(azure_connection_string, AZURE_CONTAINER_NAME)

    # List blobs in the "temp" folder
    blob_list = container_client.list_blobs(name_starts_with="temp/")
    # Initialize lists to store blob data
    image_blob_data_list = []
    audio_blob_data_list = []

    # Download PNG and MP3 files and store blob data in the lists
    for blob in blob_list:
        if blob.name.endswith('.png'):
            image_blob_data_list.append(get_blob(azure_connection_string, blob, '.png'))
        elif blob.name.endswith('.mp3'):
            audio_blob_data_list.append(get_blob(azure_connection_string, blob, '.mp3'))

    clips = []
    # Merge images and audio files (open each audio file once, not twice)
    for image, audio in zip(image_blob_data_list, audio_blob_data_list):
        audio_clip = AudioFileClip(audio)
        image_clip = ImageClip(image).set_duration(audio_clip.duration)
        image_clip = image_clip.set_audio(audio_clip)
        clips.append(image_clip)

    # Concatenate all clips
    final_clip = concatenate_videoclips(clips)

    with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as video_temp_file:
        try:
            temp_file_name = video_temp_file.name
            final_clip.write_videofile(temp_file_name, fps=24, codec='libx264', audio_codec='mp3')
        except OSError as e:
            logging.error(f"Failed: {e}", exc_info=True)

    current_datetime = datetime.datetime.now().strftime("%Y-%m-%d_%H:%M")
    final_video_name = current_datetime + FINAL_VIDEO
    tmp_blob_client = get_blob_client(azure_connection_string, AZURE_CONTAINER_NAME, final_video_name)
    with open(temp_file_name, 'rb') as video_data:
        tmp_blob_client.upload_blob(video_data, overwrite=True)



I have containerized my code, and the Docker image works perfectly on my machine. However, once deployed on Azure, I encounter this problem when writing the video:


Failed: [Errno 32] Broken pipe

MoviePy error: FFMPEG encountered the following error while writing file /tmp/tmp81o22bka.mp4: b''

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/moviepy/video/io/ffmpeg_writer.py", line 136, in write_frame
    self.proc.stdin.write(img_array.tobytes())
BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/site/wwwroot/function_app.py", line 102, in generate_simple_video
    image_clip.write_videofile(temp_file_name, fps=24)
  File "", line 2, in write_videofile
  File "/usr/local/lib/python3.8/site-packages/moviepy/decorators.py", line 54, in requires_duration
    return f(clip, *a, **k)
  File "", line 2, in write_videofile
  File "/usr/local/lib/python3.8/site-packages/moviepy/decorators.py", line 135, in use_clip_fps_by_default
    return f(clip, *new_a, **new_kw)
  File "", line 2, in write_videofile
  File "/usr/local/lib/python3.8/site-packages/moviepy/decorators.py", line 22, in convert_masks_to_RGB
    return f(clip, *a, **k)
  File "/usr/local/lib/python3.8/site-packages/moviepy/video/VideoClip.py", line 300, in write_videofile
    ffmpeg_write_video(self, filename, fps, codec,
  File "/usr/local/lib/python3.8/site-packages/moviepy/video/io/ffmpeg_writer.py", line 228, in ffmpeg_write_video
    writer.write_frame(frame)
  File "/usr/local/lib/python3.8/site-packages/moviepy/video/io/ffmpeg_writer.py", line 180, in write_frame
    raise IOError(error)
OSError: [Errno 32] Broken pipe

MoviePy error: FFMPEG encountered the following error while writing file /tmp/tmp81o22bka.mp4: b''



From my online research, everyone suggests that it’s a resource issue (lack of RAM and CPU). I increased these resources in the Azure Function App configuration, but I still face the same problem.


I took a step-by-step approach to check for compatibility issues with the MoviePy function. I created a small 30-second video without audio, and it worked. Then, I added more options such as more images, audio, etc., but it failed.


I suspected a timeout issue, since an Azure Function App has a 5-minute timeout (extendable to 10 minutes on the Consumption plan), but even with an execution time of only one minute it still fails.


I am out of ideas to test and really need help.


Thank you in advance.
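One way to narrow a failure like this down (a diagnostic sketch under stated assumptions, not a fix) is to take MoviePy out of the picture and run ffmpeg directly inside the deployed container, so ffmpeg's real stderr is visible instead of the empty `b''` MoviePy reports. The output path and test clip parameters below are arbitrary choices:

```python
import shutil
import subprocess

def build_smoke_cmd(out_path):
    """A tiny synthetic 1-second encode using the same codec as the failing call."""
    return [
        "ffmpeg", "-y",
        "-f", "lavfi", "-i", "testsrc=duration=1:size=320x240:rate=24",
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        out_path,
    ]

def ffmpeg_smoke_test(out_path="/tmp/smoke.mp4"):
    """Return 'ok' on success, otherwise ffmpeg's own stderr (not swallowed)."""
    if shutil.which("ffmpeg") is None:
        return "ffmpeg not found on PATH"
    proc = subprocess.run(build_smoke_cmd(out_path), capture_output=True, text=True)
    return "ok" if proc.returncode == 0 else proc.stderr
```

If this minimal encode also dies in the deployed container but works locally, the problem is in the environment (the ffmpeg build, /tmp space, or memory limits) rather than in the MoviePy code.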


-
aacdec_usac: rename spectrum decode function and remove unused arg
12 June 2024, by Lynne -
Streaming playlist with browser overlay [closed]
28 June 2024, by Tchoune
Do you have any idea how I can stream a video playlist on Twitch (with ffmpeg or another lib) and overlay a web page on it (with Twitch sub alerts, for example)?


I also need my system to be multi-user: a user can stream to 1 to n different Twitch channels (multiple instances).


For production, I plan to use a Linux server without a GUI. I've been looking for a solution for 4 months, but I've run out of ideas.


I've already tried Xvfb to create a virtual desktop and display a Chromium browser, but it's not workable in production.
I've also tried the whole Puppeteer approach, but it's not usable either.


My backend server runs Node.js with AdonisJS.
I'm currently using ffmpeg to broadcast a video playlist with m3u8:


startStream(): number {
  let parameters = [
    '-nostdin',
    '-re',
    '-f',
    'concat',
    '-safe',
    '0',
    '-vsync',
    'cfr',
    '-i',
    `concat:${app.publicPath(this.timelinePath)}`,
  ]

  let filterComplex = ''

  if (this.logo) {
    parameters.push('-i', app.publicPath(this.logo))
    filterComplex += '[1:v]scale=200:-1[logo];[0:v][logo]overlay=W-w-5:5[main];'
  } else {
    filterComplex += '[0:v]'
  }

  if (this.overlay) {
    parameters.push('-i', app.publicPath(this.overlay))
    filterComplex += '[2:v]scale=-1:ih[overlay];[main][overlay]overlay=0:H-h[main];'
  }

  filterComplex += `[main]drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:textfile=${app.publicPath(this.guestFile)}:reload=1:x=(w-text_w)/2:y=h-text_h-10:fontsize=18:fontcolor=white[main]; [main]drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:text='%{localtime\\:%X}':x=10:y=h-text_h-10:fontsize=16:fontcolor=white`

  parameters.push(
    '-filter_complex',
    filterComplex,
    '-copyts',
    '-pix_fmt',
    'yuv420p',
    '-s',
    '1920x1080',
    '-c:v',
    'libx264',
    '-profile:v',
    'high',
    '-preset',
    'veryfast',
    '-b:v',
    '6000k',
    '-maxrate',
    '7000k',
    '-minrate',
    '5000k',
    '-bufsize',
    '9000k',
    '-g',
    '120',
    '-r',
    '60',
    '-c:a',
    'aac',
    '-f',
    'flv',
    `${this.baseUrl}/${encryption.decrypt(this.streamKey)}`
  )

  this.instance = spawn('ffmpeg', parameters, {
    detached: true,
    stdio: ['ignore', 'pipe', 'pipe'],
  })

  // Closing added for readability; the original snippet was truncated here.
  return this.instance.pid ?? -1
}
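A side note on the input: ffmpeg's concat *demuxer* (`-f concat -i list.txt`) expects a plain text playlist with one `file '...'` line per entry, which is a different mechanism from the `concat:` protocol URL used in the command above. A minimal sketch of generating such a playlist file, with hypothetical paths:

```python
def write_concat_playlist(paths, playlist_path):
    """Write an ffmpeg concat-demuxer playlist: one `file '...'` line per input."""
    with open(playlist_path, "w") as f:
        f.write("ffconcat version 1.0\n")
        for p in paths:
            # Single quotes inside paths must be escaped for the concat demuxer.
            escaped = p.replace("'", r"'\''")
            f.write(f"file '{escaped}'\n")

# ffmpeg would then be invoked with: -f concat -safe 0 -i playlist.txt
```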



I've thought of using Webrtc, but it doesn't seem to meet my needs.


I know that GStreamer has wpeWebKit (wpesrc) to do this, but there's no Node.js wrapper and, above all, it doesn't accept playlist input (m3u8 or txt)...


If anyone has any new ideas, I'd be very grateful.