
Media (1)
-
Spitfire Parade - Crisis
15 May 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (107)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Customizing by adding your logo, banner, or background image
5 September 2013. Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Authorizations overridden by plugins
27 April 2010. MediaSPIP core:
autoriser_auteur_modifier(), so that visitors are able to edit their information on the authors page
On other sites (16274)
-
I need my music bot to play on multiple servers at the same time
12 March 2024, by Ondosh. I'm writing my music bot in Python and I want it to be able to play music on multiple servers at once. At the moment an attempt to play on several servers looks like this: I start music on the first server, then go to the second server and start different music there, but what plays there is the track that was requested on the first server.
Now, my code is:


from nextcord.ext import commands
import nextcord, random
import yt_dlp
import datetime
import lazy_queue as lq

FFMPEG_OPTIONS = {
    "before_options": "-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5",
    'options': '-vn'}

YDL_OPTIONS = {
    'format': 'bestaudio/best',
    'extractaudio': True,
    'noplaylist': True,
    'keepvideo': False,
    'postprocessors': [{
        'key': 'FFmpegExtractAudio',
        'preferredcodec': 'mp3',
        'preferredquality': '320'
    }]
}

# The bot instance itself is not shown in this snippet; something like
# bot = commands.Bot(command_prefix='!', intents=nextcord.Intents.all()) is assumed.
bot.remove_command("help")
songs_queue = lq.Queue()  # lazy_queue is the author's own queue module
loop_flag = False

@bot.command()
async def add(ctx, *url):
    url = ' '.join(url)
    with yt_dlp.YoutubeDL(YDL_OPTIONS) as ydl:
        try:
            info = ydl.extract_info(url, download=False)
        except Exception:
            # Fall back to a YouTube search when the argument is not a direct URL
            info = ydl.extract_info(f"ytsearch:{url}",
                                    download=False)['entries'][0]
        URL = info['url']
        name = info['title']
        time = str(datetime.timedelta(seconds=info['duration']))
        songs_queue.q_add([name, time, URL])
        embed = nextcord.Embed(description=f'Adding [{name}]({url}) to the queue 📝',
                               colour=nextcord.Colour.red())
        await ctx.message.reply(embed=embed)


def step_and_remove(voice_client):
    # Runs after a track finishes: re-queue it when looping, then start the next one
    if loop_flag:
        songs_queue.q_add(songs_queue.get_value()[0])
    songs_queue.q_remove()
    audio_player_task(voice_client)


def audio_player_task(voice_client):
    # Start the next queued track if nothing is currently playing
    if not voice_client.is_playing() and songs_queue.get_value():
        voice_client.play(nextcord.FFmpegPCMAudio(
            executable="ffmpeg\\bin\\ffmpeg.exe",
            source=songs_queue.get_value()[0][2],
            **FFMPEG_OPTIONS),
            after=lambda e: step_and_remove(voice_client))


@bot.command()
async def play(ctx, *url):
    # join() is a separate command (not shown here) that connects the bot
    # to the caller's voice channel
    await join(ctx)
    await add(ctx, ' '.join(url))
    await ctx.message.add_reaction(emoji='🎸')
    voice_client = ctx.guild.voice_client
    audio_player_task(voice_client)


@bot.command()
async def queue(ctx):
    if len(songs_queue.get_value()) > 0:
        only_names_and_time_queue = []
        for i in songs_queue.get_value():
            name = i[0]
            if len(i[0]) > 30:
                name = i[0][:30] + '...'
            only_names_and_time_queue.append(f'📀 `{name:<33} {i[1]:>20}`\n')
        # Split the listing into pages of ten tracks per embed
        c = 0
        queue_of_queues = []
        while c < len(only_names_and_time_queue):
            queue_of_queues.append(only_names_and_time_queue[c:c + 10])
            c += 10
        embed = nextcord.Embed(title=f'QUEUE [LOOP: {loop_flag}]',
                               description=''.join(queue_of_queues[0]),
                               colour=nextcord.Colour.red())
        await ctx.send(embed=embed)
        for i in range(1, len(queue_of_queues)):
            embed = nextcord.Embed(description=''.join(queue_of_queues[i]),
                                   colour=nextcord.Colour.red())
            await ctx.send(embed=embed)
    else:
        await ctx.send('The queue is empty')



As far as I understand, I need to create variables that will be associated with the server ID, but I do not know how to do this.
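
One common pattern here (a minimal sketch, not the author's code: guild_states, GuildState and get_state are made-up names) is to keep the queue and the loop flag in a dictionary keyed by ctx.guild.id, so that every server gets its own state instead of sharing the module-level songs_queue and loop_flag:


import lazy_queue as lq

# Per-guild player state, keyed by the Discord server (guild) ID.
guild_states = {}

class GuildState:
    def __init__(self):
        self.songs_queue = lq.Queue()   # one queue per server
        self.loop_flag = False

def get_state(guild_id):
    # Create the state lazily the first time a server uses the bot
    if guild_id not in guild_states:
        guild_states[guild_id] = GuildState()
    return guild_states[guild_id]

# Example usage inside a command:
@bot.command()
async def loop(ctx):
    state = get_state(ctx.guild.id)
    state.loop_flag = not state.loop_flag
    await ctx.send(f'Loop is now {state.loop_flag}')


Each command would then call get_state(ctx.guild.id) and work with state.songs_queue and state.loop_flag, and step_and_remove and audio_player_task would take that state alongside the voice client, so playback on one server never touches another server's queue.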


-
When ffmpeg drops frames, some things aren't played back in real time
8 February 2024, by Alex028502. I am trying to run a bunch of ffmpeg processes that act as simulators for cameras, and something funny happens when the processor can't keep up with the configured frame rate.


I have replaced the RTSP stream with an output file and managed to reproduce the issue, so I will just show that to keep it simple.


First, here is a makefile that creates my source movie:


clock.mp4: Makefile
 rm -f $@
 ffmpeg -f lavfi -i color=c=black:s=4096x2160:r=25 -vf \
"drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:fontsize=72:fontcolor=white:x=(w-text_w)/2:y=(h-text_h)/2: \
text='%{eif\:trunc(n/25)\:d}':start_number=0:rate=25" \
-t 60 -r 25 $@



That gives me a one-minute-long movie that prints the second number on the screen. I have tested it and the seconds are close enough. I put a lot of pixels in it to make it easier to jam up my CPU.


Here is the script that creates a process similar to the one I am trying to debug (called experiment.sh).

I am actually using H.264, but H.265 is easier to overwhelm the processor with.


#! /usr/bin/env bash

set -e

echo starting > message$1.txt

rm -f superclock$1.mp4
ffmpeg -re -stream_loop -1 -i clock.mp4 \
 -an -vcodec libx265 -preset ultrafast -sc_threshold -1 -x265-params repeat-headers=1 \
 -vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:fontsize=24:fontcolor=white:x=10:y=10:text='%{localtime\:%X}', \
 drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:fontsize=24:fontcolor=white:x=10:y=(h-text_h-10):textfile=message$1.txt:reload=1, \
 scale=1920x1080,fps=25" \
 -b:v 3M -minrate 3M -maxrate 3M \
 -bufsize 6M -g 25 superclock$1.mp4 &
pid=$!

for x in $(seq 0 10)
do
 echo $x > message$1.txt
 sleep 10
done

kill -INT $pid || true



It should:

- put the second in the middle of the screen, because it gets it from the source video
- put the approximate sixth of a minute in the lower left corner (only approximate because of the sleeps, but close enough)
- put the wall clock time in the upper left corner


and it works


make clock.mp4
./experiment.sh 0
vlc superclock0.mp4



shows something like this



Now here is the interesting part


If I run the script in four different terminals at the same time


./experiment.sh 1
./experiment.sh 2
./experiment.sh 3
./experiment.sh 4



It can't keep up with the frame rate, and I see this in the output:


frame= 1515 fps= 16 q=0.0 size= 768kB time=00:00:59.96 bitrate= 104.9k



I was hoping the end result would look OK when I watch it, just with fewer frames, and that the timestamps of the frames would make it all work as expected.


However...


- The time in the middle of the screen (the seconds inherited from the source video) stays in sync with VLC's clock.
- The wall clock in the upper left seems to play at 150% speed.
- The every-ten-seconds counter in the lower left seems to increment every 7 seconds.
- The video is only 1:25 long even though it was recording for at least 1:40 according to the sleeps.
- The wall clock in the upper left corner makes it to more than 1'40", and the counter in the lower left makes it to 10.

Here are four states to compare


|                 | Start    | 30" in   | End      |
|-----------------|----------|----------|----------|
| Video time      | 00:00    | 00:30    | 01:24    |
| Wall clock time | 15:05:50 | 15:06:39 | 15:07:39 |
| Sixth of minute | 0        | 4        | 10       |
| Seconds counter | 0        | 30       | 24       |





So you can see the VLC clock keeps pace with the original clock from the source movie, even when ffmpeg is only able to produce frames at 2/3 of the rate. However, it is taking about 50% longer to get through the whole source movie, I guess?


I am having trouble coming up with a theory that can explain exactly how this happens.


Does anybody know how I can "correct" this? (Make it so that the movie is played back at the same rate that it was recorded.)


I am thinking of using a lower frame rate and size for the input video, but it would be nice to have something that always works as expected, just with a lower frame rate, no matter how busy the processor is.
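
For what it's worth, the reported frame= 1515 at time=00:00:59.96 suggests nothing is actually being dropped from the output: 1515 frames at 25 fps is about 60 seconds of video, produced at only 16 fps of wall-clock speed. If that reading is right, the output timeline still advances at 25 fps while real time runs about 1.5x faster, which would explain the localtime overlay and the message counter racing ahead of the video clock while the burned-in source seconds stay in sync. The most direct mitigation would then be to make each encode cheap enough to keep up with real time. A sketch of one such change to experiment.sh (everything else unchanged): scale down before the drawtext overlays run, so they and the encoder work on 1920x1080 frames instead of 4096x2160.


ffmpeg -re -stream_loop -1 -i clock.mp4 \
 -an -vcodec libx265 -preset ultrafast -sc_threshold -1 -x265-params repeat-headers=1 \
 -vf "scale=1920:1080, \
 drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:fontsize=24:fontcolor=white:x=10:y=10:text='%{localtime\:%X}', \
 drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:fontsize=24:fontcolor=white:x=10:y=(h-text_h-10):textfile=message$1.txt:reload=1, \
 fps=25" \
 -b:v 3M -minrate 3M -maxrate 3M \
 -bufsize 6M -g 25 superclock$1.mp4 &


The thing to watch is the fps= value ffmpeg reports while all four instances run: as long as it stays at or above 25, the wall-clock overlays and the video clock should line up.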


-
Use ffmpeg to record from RTSP stream to two outputs at different resolutions while segmenting the video based on time
7 June 2023, by Eric Hansen. I need to record from an RTSP stream at two different resolutions (the original and something lower). I also need to have the files broken up according to clock time, on every even 30 minutes, e.g. 1:00, 1:30, 2:00, etc.


I imagine the command would look like this.


ffmpeg -rtsp_transport tcp -i rtsp://<rtsp url> \
 -filter_complex "[0:v]split=2[v1][v2];[v1]scale=1920:1080[out1];[v2]scale=448:252[out2]" \
 -map "[out1]" -c:v -f segment -segment_format mp4 -segment_time 00:30:00 -strftime 1 \
 /mnt/data/original-%Y-%m-%d-%H.%M.%S.mp4 \
 -map "[out2]" -c:v -f segment -segment_format mp4 -segment_time 00:30:00 -strftime 1 \
 /mnt/data/low-reso-%Y-%m-%d-%H.%M.%S.mp4


Of course, this doesn't work. The command above gives me this error.


[NULL @ 0x556c999c2140] Unable to find a suitable output format for 'segment'
segment: Invalid argument



I've tried so many variations of this command to get it to work, without any luck. Does anyone know how to do this?
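
For what it's worth, the error above is what ffmpeg prints when it treats the word segment as an output file name, which is consistent with -c:v having been left without a codec: ffmpeg then consumes -f as the codec argument and 'segment' becomes an output URL. A sketch of a corrected command, assuming libx264 is acceptable for the re-encode (the RTSP URL placeholder is left as in the question): give each -c:v an explicit codec, and use the segment muxer's segment_atclocktime option so the splits land on wall-clock multiples of the 30-minute segment_time.


# Sketch only: the codec choice (libx264) is an assumption, not from the question.
ffmpeg -rtsp_transport tcp -i rtsp://<rtsp url> \
 -filter_complex "[0:v]split=2[v1][v2];[v1]scale=1920:1080[out1];[v2]scale=448:252[out2]" \
 -map "[out1]" -c:v libx264 \
 -f segment -segment_format mp4 -segment_time 1800 -segment_atclocktime 1 \
 -reset_timestamps 1 -strftime 1 \
 /mnt/data/original-%Y-%m-%d-%H.%M.%S.mp4 \
 -map "[out2]" -c:v libx264 \
 -f segment -segment_format mp4 -segment_time 1800 -segment_atclocktime 1 \
 -reset_timestamps 1 -strftime 1 \
 /mnt/data/low-reso-%Y-%m-%d-%H.%M.%S.mp4


If the camera also sends audio, each output would additionally need a -map 0:a and an audio codec; the sketch only carries the two video streams.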