
Other articles (104)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
Customising by adding a logo, a banner or a background image
5 September 2013. Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013. Present the changes in your MediaSPIP, or the news of your projects, on your MediaSPIP via the news section.
In the default spipeo theme of MediaSPIP, news items are displayed at the bottom of the main page, below the editorials.
You can customise the news-item creation form.
News-item creation form For a document of type news item, the default fields are: Publication date (customise the publication date) (...)
On other sites (11662)
-
ffmpeg trouble with NetMaui and Android [closed]
15 June 2024, by Billy Vanegas. I find myself in the following predicament. While I work with Visual Studio 2022, ffmpeg performs flawlessly on the Windows platform. However, I'm encountering difficulties when attempting to replicate this functionality on Android. Despite exploring NuGet packages like FFMpegCore, which seem to be ffmpeg wrappers but lack ffmpeg itself, I'm still struggling to find a clear path forward. I've even tried integrating ffmpeg-kit for Android as per the instructions, only to face repeated failures and a sense of confusion. I must admit my puzzlement: why isn't there a straightforward method to seamlessly add ffmpeg to a .NET MAUI project that functions consistently across both iOS and Android platforms?


I want to convert MP3 files to WAV format using FFmpeg on the Android platform within a .NET MAUI project.


I am using the FFmpegCore library and have downloaded the FFmpeg binaries from the official FFmpeg website. However, when I try to point it at the binary folder on the emulator, I get
permission denied
in the working folder:
/data/user/0/com.companyname.projectname/files/ffmpeg
on this part of the code:

await FFMpegArguments
    .FromFileInput(mp3Path)
    .OutputToFile(wavPath, true, options => options
        .WithAudioCodec("pcm_s16le")
        .WithAudioSamplingRate(44100)
        .WithAudioBitrate(320000))
    .ProcessAsynchronously();



This is the
AndroidManifest.xml


<?xml version="1.0" encoding="utf-8"?>
<manifest>
 <application></application>
 
 
 
 
</manifest>



private async Task ConvertMp3ToWav(string mp3Path, string wavPath)
{
    try
    {
        var directory = Path.GetDirectoryName(wavPath);
        if (!Directory.Exists(directory))
            Directory.CreateDirectory(directory!);

        if (!File.Exists(wavPath))
            Console.WriteLine($"File not found {wavPath}, creating empty file.");
        using var fs = new FileStream(wavPath, FileMode.CreateNew);

        if (!File.Exists(mp3Path))
            Console.WriteLine($"File not found {mp3Path}");

        string? ffmpegBinaryPath = await ExtractFFmpegBinaries(Platform.AppContext);
        FFMpegCore.GlobalFFOptions.Configure(new FFOptions { BinaryFolder = Path.GetDirectoryName(ffmpegBinaryPath!)! });

        await FFMpegArguments
            .FromFileInput(mp3Path)
            .OutputToFile(wavPath, true, options => options
                .WithAudioCodec("pcm_s16le")
                .WithAudioSamplingRate(44100)
                .WithAudioBitrate(320000))
            .ProcessAsynchronously();
    }
    catch (Exception ex)
    {
        Console.WriteLine($"An error occurred during the conversion process: {ex.Message}");
        throw;
    }
}



private async Task<string> ExtractFFmpegBinaries(Context context)
{
    var architectureFolder = "x86"; // "armeabi-v7a";
    var ffmpegBinaryName = "ffmpeg";
    var ffmpegBinaryPath = Path.Combine(context.FilesDir!.AbsolutePath, ffmpegBinaryName);
    var tempFFMpegFileName = Path.Combine(FileSystem.AppDataDirectory, ffmpegBinaryName);

    if (!File.Exists(ffmpegBinaryPath))
    {
        try
        {
            var assetPath = $"Libs/{architectureFolder}/{ffmpegBinaryName}";
            using var assetStream = context.Assets!.Open(assetPath);

            await using var tempFFMpegFile = File.OpenWrite(tempFFMpegFileName);
            await assetStream.CopyToAsync(tempFFMpegFile);

            //new MainActivity().RequestStoragePermission();
            Java.Lang.Runtime.GetRuntime()!.Exec($"chmod 755 {tempFFMpegFileName}");
        }
        catch (Exception ex)
        {
            Console.WriteLine($"An error occurred while extracting FFmpeg binaries: {ex.Message}");
            throw;
        }
    }
    else
    {
        Console.WriteLine($"FFmpeg binaries already extracted to: {ffmpegBinaryPath}");
    }

    return tempFFMpegFileName!;
}


-
Problems with Python's azure.cognitiveservices.speech when installing together with FFmpeg in a Linux web app
15 May 2024, by Kakobo kakobo. I need some help.
I'm building a web app that takes any audio format, converts it into a .wav file, and then passes it to azure.cognitiveservices.speech for transcription. I'm building the web app via a container Dockerfile, as I need to install ffmpeg to be able to convert non-.wav audio files to .wav (Azure speech services only process wav files). For some odd reason, the speechsdk class of azure.cognitiveservices.speech fails to work when I install ffmpeg in the web app. The class works perfectly fine when I install it without ffmpeg, or when I build and run the container on my machine.


I have placed debug print statements in the code. I can see the class initiating, but for some reason it does not buffer in the same way as when running it locally on my machine. The routine simply stops without any reason.


Has anybody experienced a similar issue with azure.cognitiveservices.speech conflicting with ffmpeg?


Here's my Dockerfile:


# Use an official Python runtime as a parent image
FROM python:3.11-slim

# Version run
RUN echo "Version Run 1..."

# Install ffmpeg, ensure it is executable, and clean up the apt cache
# (removing /var/lib/apt/lists saves space)
RUN apt-get update && apt-get install -y ffmpeg \
    && chmod a+rx /usr/bin/ffmpeg \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 8000 available to the world outside this container
EXPOSE 8000

# Define environment variable
ENV NAME World

# Run main.py when the container launches
CMD ["streamlit", "run", "main.py", "--server.port", "8000", "--server.address", "0.0.0.0"]

and here's my Python code:



def transcribe_audio_continuous_old(temp_dir, audio_file, language):
    speech_key = azure_speech_key
    service_region = azure_speech_region

    time.sleep(5)
    print(f"DEBUG TIME BEFORE speechconfig")

    ran = generate_random_string(length=5)
    temp_file = f"transcript_key_{ran}.txt"
    output_text_file = os.path.join(temp_dir, temp_file)
    speech_recognition_language = set_language_to_speech_code(language)

    speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
    speech_config.speech_recognition_language = speech_recognition_language
    audio_input = speechsdk.AudioConfig(filename=os.path.join(temp_dir, audio_file))

    speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_input, language=speech_recognition_language)
    done = False
    transcript_contents = ""

    time.sleep(5)
    print(f"DEBUG TIME AFTER speechconfig")
    print(f"DEBUG File about to be passed {audio_file}")

    try:
        with open(output_text_file, "w", encoding=encoding) as file:
            def recognized_callback(evt):
                print("Start continuous recognition callback.")
                print(f"Recognized: {evt.result.text}")
                file.write(evt.result.text + "\n")
                nonlocal transcript_contents
                transcript_contents += evt.result.text + "\n"

            def stop_cb(evt):
                print("Stopping continuous recognition callback.")
                print(f"Event type: {evt}")
                speech_recognizer.stop_continuous_recognition()
                nonlocal done
                done = True

            def canceled_cb(evt):
                print(f"Recognition canceled: {evt.reason}")
                if evt.reason == speechsdk.CancellationReason.Error:
                    print(f"Cancellation error: {evt.error_details}")
                nonlocal done
                done = True

            speech_recognizer.recognized.connect(recognized_callback)
            speech_recognizer.session_stopped.connect(stop_cb)
            speech_recognizer.canceled.connect(canceled_cb)

            speech_recognizer.start_continuous_recognition()
            while not done:
                time.sleep(1)
                print("DEBUG LOOPING TRANSCRIPT")

    except Exception as e:
        print(f"An error occurred: {e}")

    print("DEBUG DONE TRANSCRIPT")

    return temp_file, transcript_contents



This transcription callback works fine locally, or when installed without ffmpeg in the Linux web app. I'm not sure why it conflicts with ffmpeg when installed via the container Dockerfile. The code section that fails can be found at the note "#NOTE DEBUG".
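One way to sidestep the Python-level interaction entirely is to shell out to the ffmpeg binary that the Dockerfile installs, and to fail fast if ffmpeg is not actually on PATH inside the container. This is only a sketch: the function names and the 16 kHz mono PCM target (a format the Speech SDK accepts) are my own choices, not taken from the question.

```python
import shutil
import subprocess

def build_wav_command(src_path: str, dst_path: str) -> list[str]:
    """Build an ffmpeg command converting any input audio to
    16 kHz mono 16-bit PCM WAV for the Speech SDK."""
    return [
        "ffmpeg", "-y",          # overwrite the output if it exists
        "-i", src_path,
        "-ac", "1",              # mono
        "-ar", "16000",          # 16 kHz sample rate
        "-acodec", "pcm_s16le",  # 16-bit little-endian PCM
        dst_path,
    ]

def convert_to_wav(src_path: str, dst_path: str) -> None:
    """Run the conversion, failing loudly if ffmpeg is missing or errors out."""
    if shutil.which("ffmpeg") is None:
        raise RuntimeError("ffmpeg is not on PATH inside this container")
    subprocess.run(build_wav_command(src_path, dst_path),
                   check=True, capture_output=True)
```

Running the conversion in a subprocess keeps ffmpeg out of the Python process, which may help isolate whatever the Speech SDK is colliding with.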


-
ffmpeg concatenation with -filter_complex
16 octobre 2018, par IgniterI’ve seen several similar questions but none of them actually helped in my case.
Getting this error while trying to join 1 audio and 4 video files of different nature and resolutions:

ffmpeg -i 0.mp3 -i 1.mp4 -i 2.mkv -i 3.mkv -i 4.webm \
 -filter_complex [0:a:0][1:v:0][2:v:0][3:v:0][4:v:0]concat=n=5:v=1:a=1[outv][outa] \
 -map "[outv]" -map "[outa]" output.mp4

All this gives the following error:
Stream specifier ':a:0' in filtergraph description [0:a:0][1:v:0][2:v:0][3:v:0][4:v:0]concat=n=5:v=1:a=1[outv][outa] matches no streams.
Straight concatenation with
-i "concat:0.mp3|1.mp4..."
also doesn't work as expected, due to the different resolutions and video formats. The syntax for both methods was taken from the official documentation, but there must be something I've missed here. Full output log:
ffmpeg version 3.4.4-0ubuntu0.18.04.1 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)
configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libavresample 3. 7. 0 / 3. 7. 0
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
libpostproc 54. 7.100 / 54. 7.100
Input #0, mp3, from 'mp3/10.mp3':
Metadata:
album_artist : artist
title : title
artist : 10
album : 12
track : 1
VideoKind : 2
date : 2009
Duration: 00:06:00.44, start: 0.025056, bitrate: 64 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16p, 64 kb/s
Metadata:
encoder : LAME3.98r
Stream #0:1: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 200x200 [SAR 72:72 DAR 1:1], 90k tbr, 90k tbn, 90k tbc
Metadata:
comment : Cover (front)
Input #1, matroska,webm, from '1.mp4':
Metadata:
MINOR_VERSION : 0
COMPATIBLE_BRANDS: iso6avc1mp41
MAJOR_BRAND : dash
ENCODER : Lavf57.83.100
Duration: 00:01:53.05, start: 0.007000, bitrate: 2292 kb/s
Stream #1:0: Video: h264 (High), yuv420p(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 24 fps, 24 tbr, 1k tbn, 48 tbc (default)
Metadata:
HANDLER_NAME : VideoHandler
DURATION : 00:01:53.048000000
Input #2, matroska,webm, from '2.mkv':
Metadata:
MINOR_VERSION : 0
COMPATIBLE_BRANDS: iso6avc1mp41
MAJOR_BRAND : dash
ENCODER : Lavf57.83.100
Duration: 00:02:08.09, start: 0.007000, bitrate: 1607 kb/s
Stream #2:0: Video: h264 (High), yuv420p(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 24 fps, 24 tbr, 1k tbn, 48 tbc (default)
Metadata:
HANDLER_NAME : VideoHandler
DURATION : 00:02:08.090000000
Input #3, matroska,webm, from '3.mkv':
Metadata:
MINOR_VERSION : 0
COMPATIBLE_BRANDS: iso6avc1mp41
MAJOR_BRAND : dash
ENCODER : Lavf57.83.100
Duration: 00:01:37.05, start: 0.007000, bitrate: 3525 kb/s
Stream #3:0: Video: h264 (High), yuv420p(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 24 fps, 24 tbr, 1k tbn, 48 tbc (default)
Metadata:
HANDLER_NAME : VideoHandler
DURATION : 00:01:37.048000000
Input #4, matroska,webm, from '4.webm':
Metadata:
MINOR_VERSION : 0
COMPATIBLE_BRANDS: iso6avc1mp41
MAJOR_BRAND : dash
ENCODER : Lavf57.83.100
Duration: 00:01:45.13, start: 0.007000, bitrate: 3685 kb/s
Stream #4:0: Video: h264 (High), yuv420p(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 24 fps, 24 tbr, 1k tbn, 48 tbc (default)
Metadata:
HANDLER_NAME : VideoHandler
DURATION : 00:01:45.131000000
Stream specifier ':a:0' in filtergraph description [0:a:0][1:v:0][2:v:0][3:v:0][4:v:0]concat=n=5:v=1:a=1[outv][outa] matches no streams.
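A likely reading of the error above: concat=n=5:v=1:a=1 requires every one of the five segments to supply one video and one audio stream, but the MP3 segment has no usable video (only cover art) and the four video inputs carry no audio, so ffmpeg cannot pair the pads. One workaround is to concatenate only the four videos (a=0), normalising them to a common frame size first, and map the MP3 as the output's audio track. A sketch that builds such a command (the helper name and the 1920x1080 target are illustrative, not from the question):

```python
def build_concat_command(audio: str, videos: list[str], out: str) -> list[str]:
    """Concatenate video-only inputs with the concat filter (a=0),
    then map the separate audio file as the soundtrack."""
    inputs = ["-i", audio]
    for v in videos:
        inputs += ["-i", v]
    # Scale/pad every video to 1920x1080 so concat sees matching frames.
    chains = []
    for i in range(len(videos)):
        chains.append(
            f"[{i + 1}:v:0]scale=1920:1080:force_original_aspect_ratio=decrease,"
            f"pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1[v{i}]"
        )
    concat_in = "".join(f"[v{i}]" for i in range(len(videos)))
    filtergraph = (";".join(chains)
                   + f";{concat_in}concat=n={len(videos)}:v=1:a=0[outv]")
    return (["ffmpeg"] + inputs
            + ["-filter_complex", filtergraph,
               "-map", "[outv]", "-map", "0:a:0",  # audio comes from input 0
               "-shortest", out])
```

With -shortest, the output stops at whichever ends first, the audio or the concatenated video; drop it if the full audio should always be kept.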