
Media (2)
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
Other articles (77)
-
Selection of projects using MediaSPIP
2 May 2011 — The examples below are representative of specific uses of MediaSPIP in specific projects.
MediaSPIP farm @ Infini
The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen such associations. Its members (...)
-
The MediaSPIP configuration area
29 November 2010 — The MediaSPIP configuration area is reserved for administrators. An "administer" menu link is usually displayed at the top of the page [1].
It allows you to configure your site in detail.
Navigation in this configuration area is divided into three parts: the general site configuration, which notably lets you modify the main information about the site (...)
-
Installation in farm mode
4 February 2011 — Farm mode makes it possible to host several MediaSPIP sites while installing their functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require specific knowledge since SPIP's usual private area is no longer used.
To begin with, you must have installed the same files as the installation (...)
On other sites (10600)
-
ffmpeg makes video with no sound on video.js
21 September 2023, by Laurent B — I have this code that creates an m3u8 file to stream an MKV file after transcoding.


ffmpeg.setFfmpegPath(ffmpegPath);
childProcess = ffmpeg()
  .input(inputFilePath)
  // .native()
  .inputOptions(['-y', '-progress', 'pipe:1'])
  .outputOptions(['-b:v 1M', '-hls_time 2', '-hls_list_size 0', '-hls_segment_size 500000'])
  .output('public/output.m3u8')
  .on('end', () => {
    io.emit('conversionComplete', { percent: 100, time: totalDuration, totalDuration, timemark: millisecondsToTimeString(totalDuration) });
    childProcess = null;
  })
  .on('error', (error) => {
    io.emit('conversionError', { error });
    childProcess = null;
  })
  .on('progress', (progress) => {
    io.emit('conversionProgress', { ...progress, time: timeStringToMilliseconds(progress.timemark), totalDuration });
  });

childProcess.run();



The m3u8 is readable by VLC and video.js and can be cast to a Chromecast. For some other files, around 4 GB, it works only when the video is transcoded without sound, using the option .inputOption('-an').


Here's the content of the m3u8 file:


#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:4
#EXTINF:10.000000,
output4.ts
#EXTINF:10.000000,
output5.ts
#EXTINF:8.640000,
output6.ts
#EXTINF:10.000000,
output7.ts
#EXTINF:10.000000,
output8.ts



- I tried all the audio codecs provided by ffmpeg
- I tried bigger and smaller ts segments
- I tried other bitrates

Thanks in advance for your ideas.
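
An added note (not part of the original question): a frequent cause of silent HLS playback in browsers is an audio codec the browser cannot decode in MPEG-TS segments, such as the AC-3 or DTS tracks commonly found in MKV files, even though VLC plays them fine. Below is a hedged sketch of a diagnostic run with the audio stream explicitly mapped and re-encoded to stereo AAC; it is written as a direct ffmpeg invocation (via Python's subprocess for self-containment), and the same flags can be passed through fluent-ffmpeg's outputOptions. The input file name and the bitrates are placeholders, not from the question.

# Hypothetical diagnostic: re-run the conversion with the audio explicitly
# mapped and re-encoded to stereo AAC, which browser HLS players can decode.
# "input.mkv" and the bitrates are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-i", "input.mkv",
    "-map", "0:v:0", "-map", "0:a:0",            # first video and first audio stream
    "-c:v", "libx264", "-b:v", "1M",
    "-c:a", "aac", "-ac", "2", "-b:a", "128k",   # force stereo AAC audio
    "-hls_time", "2", "-hls_list_size", "0",
    "public/output.m3u8",
], check=True)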


-
Accented characters are not recognized in python [closed]
10 April 2023, by CorAnna — I have a problem in my Python script. The script should burn subtitles into a video from an srt file; this srt file is written by another part of the script, but it replaces the accents and all special characters with a black square symbol containing a question mark... I think the problem lies in the writing of this file, and as a result, when I burn the subtitles with ffmpeg, the sentences that contain an accented word are not written.


# Imports implied by the snippet (assumed: moviepy as mp, googletrans as gt,
# and write_srt from older openai-whisper releases).
import os
import time
from pathlib import Path

import moviepy.editor as mp
import whisper
import googletrans as gt
from whisper.utils import write_srt


def video_audio_file_writer(video_file):

    videos_folder = "Video"
    audios_folder = "Audio"

    video_path = f"{videos_folder}\\{video_file}"

    video_name = Path(video_path).stem
    audio_name = f"{video_name}Audio"

    audio_path = f"{audios_folder}\\{audio_name}.wav"

    video = mp.VideoFileClip(video_path)
    audio = video.audio.write_audiofile(audio_path)

    return video_path, audio_path, video_name

def audio_file_transcription(audio_path, lang):

    model = whisper.load_model("base")
    tran = gt.Translator()

    audio_file = str(audio_path)

    options = dict(beam_size=5, best_of=5)
    translate = dict(task="translate", **options)
    result = model.transcribe(audio_file, **translate)

    return result

def audio_subtitles_transcription(result, video_name):

    subtitle_folder = "Content"
    subtitle_name = f"{video_name}Subtitle"
    subtitle_path_form = "srt"

    subtitle_path = f"{subtitle_folder}\\{subtitle_name}.{subtitle_path_form}"

    with open(os.path.join(subtitle_path), "w") as srt:
        # write_vtt(result["segments"], file=vtt)
        write_srt(result["segments"], file=srt)

    return subtitle_path

def video_subtitles(video_path, subtitle_path, video_name):

    video_subtitled_folder = "VideoSubtitles"
    video_subtitled_name = f"{video_name}Subtitles"
    video_subtitled_path = f"{video_subtitled_folder}\\{video_subtitled_name}.mp4"

    video_path_b = bytes(video_path, 'utf-8')
    subtitle_path_b = bytes(subtitle_path, 'utf-8')
    video_subtitled_path_b = bytes(video_subtitled_path, 'utf-8')

    path_abs_b = os.getcwdb() + b"\\"

    path_abs_bd = path_abs_b.decode('utf-8')
    video_path_bd = video_path_b.decode('utf-8')
    subtitle_path_bd = subtitle_path_b.decode('utf-8')
    video_subtitled_path_bd = video_subtitled_path_b.decode('utf-8')

    video_path_abs = str(path_abs_bd + video_path_bd)
    subtitle_path_abs = str(path_abs_bd + subtitle_path_bd).replace("\\", "\\\\").replace(":", "\\:")
    video_subtitled_path_abs = str(path_abs_bd + video_subtitled_path_bd)

    time.sleep(3)

    os.system(f"ffmpeg -i {video_path_abs} -vf subtitles='{subtitle_path_abs}' -y {video_subtitled_path_abs}")

    return video_subtitled_path_abs, video_path_abs, subtitle_path_abs

if __name__ == "__main__":

    video_path, audio_path, video_name = video_audio_file_writer(video_file="ChiIng.mp4")
    result = audio_file_transcription(audio_path=audio_path, lang="it")
    subtitle_path = audio_subtitles_transcription(result=result, video_name=video_name)
    video_subtitled_path_abs, video_path_abs, subtitle_path_abs = video_subtitles(video_path=video_path, subtitle_path=subtitle_path, video_name=video_name)

    print("Video Subtitled")


Windows 11
Python 3.10
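
An added note (not part of the original question): on Windows, Python's open() defaults to the system's legacy ANSI code page rather than UTF-8, so the srt written above may be encoded in, for example, cp1252 while ffmpeg's subtitles filter reads it as UTF-8, which turns accented characters into replacement symbols. A minimal sketch of the assumed fix, forcing UTF-8 when writing the srt:

# Hypothetical fix: write the srt explicitly as UTF-8 so that text editors
# and ffmpeg's subtitles filter decode the accents correctly.
with open(subtitle_path, "w", encoding="utf-8") as srt:
    write_srt(result["segments"], file=srt)

# Alternatively, the subtitles filter can be told which encoding to expect
# through its charenc option, e.g.:
# ffmpeg -i in.mp4 -vf "subtitles=subs.srt:charenc=cp1252" -y out.mp4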


-
Bit-field badness
30 January 2010, by Mans — Compilers, Optimisation
Consider the following C code, which is based on a real-world situation.
struct bf1_31 {
    unsigned a:1;
    unsigned b:31;
};

void func(struct bf1_31 *p, int n, int a)
{
    int i = 0;
    do {
        if (p[i].a)
            p[i].b += a;
    } while (++i < n);
}
How would we best write this in ARM assembler? This is how I would do it:

func:
        ldr     r3, [r0], #4
        tst     r3, #1
        add     r3, r3, r2, lsl #1
        strne   r3, [r0, #-4]
        subs    r1, r1, #1
        bgt     func
        bx      lr
The add instruction is unconditional to avoid a dependency on the comparison. Unrolling the loop would mask the latency of the ldr instruction as well, but that is outside the scope of this experiment.

Now compile this code with gcc -march=armv5te -O3 and watch in horror:

func:
        push    {r4}
        mov     ip, #0
        mov     r4, r2
loop:
        ldrb    r3, [r0]
        add     ip, ip, #1
        tst     r3, #1
        ldrne   r3, [r0]
        andne   r2, r3, #1
        addne   r3, r4, r3, lsr #1
        orrne   r2, r2, r3, lsl #1
        strne   r2, [r0]
        cmp     ip, r1
        add     r0, r0, #4
        blt     loop
        pop     {r4}
        bx      lr

This is nothing short of awful:
- The same value is loaded from memory twice.
- A complicated mask/shift/or operation is used where a simple shifted add would suffice.
- Write-back addressing is not used.
- The loop control counts up and compares instead of counting down.
- Useless mov in the prologue; swapping the roles of r2 and r4 would avoid this.
- Using lr in place of r4 would allow the return to be done with pop {pc}, saving one instruction (ignoring for the moment that no callee-saved registers are needed at all).
Even for this trivial function the gcc-generated code is more than twice the optimal size and slower by approximately the same factor.
The main issue I wanted to illustrate is the poor handling of bit-fields by gcc. When accessing bit-fields from memory, gcc issues a separate load for each field even when they are contained in the same aligned memory word. Although each load after the first will most likely hit L1 cache, this is still bad for several reasons:
- Loads typically have two or three cycles of result latency, compared to one cycle for data processing instructions. Any bit-field can be extracted from a register with two shifts, and on ARM the second of these can generally be achieved using a shifted second operand to a following instruction. The ARMv6T2 instruction set also adds the SBFX and UBFX instructions for extracting any signed or unsigned bit-field in one cycle (a small illustration of this extraction follows the list).
- Most CPUs have more data processing units than load/store units. It is thus more likely for an ALU instruction than a load/store to issue without delay on a superscalar processor.
- Redundant memory accesses can trigger early flushing of store buffers rendering these less efficient.
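
The following small model is an added illustration, not from the original post: it mimics the two-shift extraction on a 32-bit value in Python, assuming gcc's LSB-first bit-field layout for struct bf1_31 (a in bit 0, b in bits 1-31).

# Added illustration (assumed layout: a = bit 0, b = bits 1-31 of a 32-bit word).
MASK32 = 0xFFFFFFFF

def extract_a(word):
    # shift left then logical right to isolate the single low bit
    return ((word << 31) & MASK32) >> 31

def extract_b(word):
    # the unsigned 31-bit field needs only one logical shift right
    return (word & MASK32) >> 1

word = 0x80000003          # a = 1, b = 0x40000001
assert extract_a(word) == 1
assert extract_b(word) == 0x40000001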
No gcc bashing is complete without a comparison with another compiler, so without further ado, here is the ARM RVCT output (armcc --cpu 5te -O3):

func:
        mov     r3, #0
        push    {r4, lr}
loop:
        ldr     ip, [r0, r3, lsl #2]
        tst     ip, #1
        addne   ip, ip, r2, lsl #1
        strne   ip, [r0, r3, lsl #2]
        add     r3, r3, #1
        cmp     r3, r1
        blt     loop
        pop     {r4, pc}
This is much better, the core loop using only one instruction more than my version. The loop control is counting up, but at least this register is reused as offset for the memory accesses. More remarkable is the push/pop of two registers that are never used. I had not expected to see this from RVCT.
Even the best compilers are still no match for a human.