
Media (2)
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Podcasting Legal guide
16 May 2011
Updated: May 2011
Language: English
Type: Text
Other articles (73)
-
Customising by adding your logo, banner or background image
5 September 2013
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013
Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP using the news section.
In MédiaSPIP's default spipeo theme, news items are displayed at the bottom of the main page, below the editorials.
You can customise the form used to create a news item.
News item creation form: for a document of type news item, the default fields are: publication date (customise the publication date) (...)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.
On other sites (13591)
-
How do I make my discord bot play music by using youtube-dl's search function instead of a URL? (Python)
28 September 2021, by PypypieYum
I want the bot to search for a video and play it. How can I change the following code to achieve that? Every time I use the ytsearch feature in ytdl, I notice that it only searches for the first word of the title and downloads it; it then raises an error later on and does nothing.


@commands.command()
async def play(self, ctx, url):
    if ctx.author.voice is None:
        await ctx.send("You are not in a voice channel!")
    voice_channel = ctx.author.voice.channel
    if ctx.voice_client is None:
        await voice_channel.connect()
    else:
        await ctx.voice_client.move_to(voice_channel)

    ctx.voice_client.stop()
    FFMPEG_OPTIONS = {'before_options': '-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5', 'options': '-vn'}
    YDL_OPTIONS = {'format': "bestaudio", 'default_search': "ytsearch"}
    vc = ctx.voice_client

    with youtube_dl.YoutubeDL(YDL_OPTIONS) as ydl:
        info = ydl.extract_info(url, download=False)
        if 'entries' in info:
            url2 = info["entries"][0]["formats"][0]
        elif 'formats' in info:
            url2 = info["formats"][0]['url']
        source = await discord.FFmpegOpusAudio.from_probe(url2, **FFMPEG_OPTIONS)
        vc.play(source)



And this is the error message:


Ignoring exception in command play:
Traceback (most recent call last):
 File "/opt/virtualenvs/python3/lib/python3.8/site-packages/discord/ext/commands/core.py", line 85, in wrapped
 ret = await coro(*args, **kwargs)
 File "/home/runner/HandmadeLivelyLines/music.py", line 44, in play
 source = await discord.FFmpegOpusAudio.from_probe(url2, **FFMPEG_OPTIONS)
 File "/opt/virtualenvs/python3/lib/python3.8/site-packages/discord/player.py", line 387, in from_probe
 return cls(source, bitrate=bitrate, codec=codec, **kwargs)
 File "/opt/virtualenvs/python3/lib/python3.8/site-packages/discord/player.py", line 324, in __init__
 super().__init__(source, executable=executable, args=args, **subprocess_kwargs)
 File "/opt/virtualenvs/python3/lib/python3.8/site-packages/discord/player.py", line 138, in __init__
 self._process = self._spawn_process(args, **kwargs)
 File "/opt/virtualenvs/python3/lib/python3.8/site-packages/discord/player.py", line 144, in _spawn_process
 process = subprocess.Popen(args, creationflags=CREATE_NO_WINDOW, **subprocess_kwargs)
 File "/usr/lib/python3.8/subprocess.py", line 858, in __init__
 self._execute_child(args, executable, preexec_fn, close_fds,
 File "/usr/lib/python3.8/subprocess.py", line 1639, in _execute_child
 self.pid = _posixsubprocess.fork_exec(
TypeError: expected str, bytes or os.PathLike object, not dict

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
 File "/opt/virtualenvs/python3/lib/python3.8/site-packages/discord/ext/commands/bot.py", line 939, in invoke
 await ctx.command.invoke(ctx)
 File "/opt/virtualenvs/python3/lib/python3.8/site-packages/discord/ext/commands/core.py", line 863, in invoke
 await injected(*ctx.args, **ctx.kwargs)
 File "/opt/virtualenvs/python3/lib/python3.8/site-packages/discord/ext/commands/core.py", line 94, in wrapped
 raise CommandInvokeError(exc) from exc
discord.ext.commands.errors.CommandInvokeError: Command raised an exception: TypeError: expected str, bytes or os.PathLike object, not dict



Thanks.
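
For readers hitting the same traceback: from_probe is handed a dict because, in the 'entries' branch, url2 is set to a whole format dict rather than to its 'url' string. Below is a minimal sketch of one possible fix, reusing the question's YDL_OPTIONS and FFMPEG_OPTIONS; the keyword-only query parameter is an assumption, added so the command receives the full search phrase instead of only the first word (which is why ytsearch appeared to search only the first word of the title).

# Sketch only, not the original poster's code: pass the format's 'url' string to
# from_probe, and take the rest of the message as a single search phrase.
@commands.command()
async def play(self, ctx, *, query):
    if ctx.author.voice is None:
        await ctx.send("You are not in a voice channel!")
        return
    voice_channel = ctx.author.voice.channel
    if ctx.voice_client is None:
        await voice_channel.connect()
    else:
        await ctx.voice_client.move_to(voice_channel)

    ctx.voice_client.stop()
    FFMPEG_OPTIONS = {'before_options': '-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5', 'options': '-vn'}
    YDL_OPTIONS = {'format': 'bestaudio', 'default_search': 'ytsearch'}
    vc = ctx.voice_client

    with youtube_dl.YoutubeDL(YDL_OPTIONS) as ydl:
        info = ydl.extract_info(query, download=False)
        # With default_search set to ytsearch, a plain-text query returns a
        # playlist-like result whose matches sit under 'entries'.
        if 'entries' in info:
            info = info['entries'][0]
        # 'formats'[0] is a dict describing one format; from_probe() needs the
        # stream URL string, hence the extra ['url'].
        stream_url = info['formats'][0]['url']
        source = await discord.FFmpegOpusAudio.from_probe(stream_url, **FFMPEG_OPTIONS)
        vc.play(source)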


-
Best way to stream FFMPEG conversion to HTML5 video element?
4 August 2021, by SamTheFam

Context


My friend and I are currently working on a media player desktop application using Electron (which packages web code as a desktop application). For playback we use the HTML5 video element, loading a video from the user's local machine. Most browsers only support MP4, OGG, and WebM (Mozilla Specification), but we would like to support more formats to reach more users. For this reason, we decided to use the FFMPEG video converter. As FFMPEG takes some time to convert videos, we would like a streaming implementation that converts the video while the HTML5 video is playing.


The Problem


The problem is that we cannot figure out the best way to stream our video while it is being converted to another format. The primary issue is that we are unsure how to implement seeking, where we would convert from the specific point the user skips to.


For further clarification of the issue: if a user provided a .mkv file, we would start the FFMPEG conversion to .mp4 and convert enough for the video to start playing. While the video is being watched, FFMPEG would continue converting ahead of the point the user has reached. If the user wanted to skip through the video, we would need a way to fast-forward the conversion to a specific point, so the user can watch that section if it has not already been converted.

Ideally we would like the conversion to run faster than playback, but if that comes at a cost in quality, we would prefer to make it optional for the end user.


The first solution that came to mind was to convert the video in small sections, producing many short (3-5 second) clips to be loaded into the video element one after another. The issue is that the video element takes quite a long time to load a new source file, which makes for an awful user experience, with the video going black every few seconds.


Another possible solution we stumbled across was mentioned here. If we implemented it, we would start an HTTP server on the user's local machine. Our concern is that it could be a rather bloated solution that is tedious to maintain, and it would likely slow down both the application's start-up time and its runtime.


Thank you in advance, we look forward to your responses.
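
A note on the seek requirement described above: ffmpeg can begin converting at an arbitrary timestamp with -ss, and a fragmented MP4 written to a pipe is playable while it is still being produced, which is the usual way to feed a growing conversion to a player or local HTTP response. The sketch below only illustrates those two ideas, written in Python for brevity (the same ffmpeg arguments apply when spawned from Node/Electron); the input path, seek time, and the restart-on-seek approach are assumptions, not the asker's design.

# Illustration only: (re)start an ffmpeg conversion from a given seek point and
# stream a fragmented MP4 (playable before the file is finished) to stdout.
import subprocess

def start_conversion(input_path, seek_seconds=0):
    cmd = [
        "ffmpeg",
        "-ss", str(seek_seconds),        # seek before -i: jump quickly to this point
        "-i", input_path,                # e.g. the user's .mkv file
        "-c:v", "libx264", "-preset", "veryfast",
        "-c:a", "aac",
        "-movflags", "frag_keyframe+empty_moov",  # make the MP4 streamable as it grows
        "-f", "mp4",
        "pipe:1",
    ]
    # stdout is the growing MP4 stream; pipe it to the player or HTTP response.
    return subprocess.Popen(cmd, stdout=subprocess.PIPE)

proc = start_conversion("input.mkv", seek_seconds=90)  # e.g. the user skipped to 1:30

On a user seek, the running process would be stopped and a new one started with the new -ss value; the local HTTP server mentioned in the question is essentially one way of exposing such a pipe to the video element.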


-
Is there an efficient way to retrieve frames from a video in Android?
28 March 2015, by Naveed
I have an app which requires me to retrieve frames from a video and do some processing on them. However, frame retrieval is very slow, to the point where it is unacceptable: sometimes it takes up to 2.5 seconds to retrieve a single frame. I am using MediaMetadataRetriever, as most Stack Overflow questions suggest, but the performance is very poor. Here is what I have:
private List<Bitmap> retrieveFrames() {
    MediaMetadataRetriever fmmr = new MediaMetadataRetriever();
    fmmr.setDataSource("/path/to/some/video.mp4");
    String strLength = fmmr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
    long milliSecs = Long.parseLong(strLength);
    long microSecLength = milliSecs * 1000;
    Log.d("TAG", "length: " + microSecLength);
    long one_sec = 1000000; // one sec in micro seconds
    ArrayList<Bitmap> frames = new ArrayList<>();
    int j = 0;
    for (int i = 0; i < microSecLength; i += (one_sec / 5)) {
        long time = System.currentTimeMillis();
        Bitmap frame = fmmr.getFrameAtTime(i, MediaMetadataRetriever.OPTION_CLOSEST);
        j++;
        Log.d("TAG", "Frame number: " + j + " Time taken: " + (System.currentTimeMillis() - time));
        // commented out because each frame would be written to disk instead of holding them in memory
        // frames.add(frame);
    }
    fmmr.release();
    return frames;
}
The above logs:
03-26 21:49:29.781 13213-13239/com.example.naveed.myapplication D/TAG﹕ length: 4949000
03-26 21:49:30.187 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 1 Time taken: 406
03-26 21:49:30.779 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 2 Time taken: 592
03-26 21:49:31.578 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 3 Time taken: 799
03-26 21:49:32.632 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 4 Time taken: 1054
03-26 21:49:33.895 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 5 Time taken: 1262
03-26 21:49:35.382 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 6 Time taken: 1486
03-26 21:49:37.128 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 7 Time taken: 1746
03-26 21:49:39.077 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 8 Time taken: 1948
03-26 21:49:41.287 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 9 Time taken: 2210
03-26 21:49:43.717 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 10 Time taken: 2429
03-26 21:49:44.093 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 11 Time taken: 376
03-26 21:49:44.707 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 12 Time taken: 614
03-26 21:49:45.539 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 13 Time taken: 831
03-26 21:49:46.597 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 14 Time taken: 1057
03-26 21:49:47.875 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 15 Time taken: 1278
03-26 21:49:49.384 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 16 Time taken: 1508
03-26 21:49:51.112 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 17 Time taken: 1728
03-26 21:49:53.096 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 18 Time taken: 1983
03-26 21:49:55.315 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 19 Time taken: 2218
03-26 21:49:57.711 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 20 Time taken: 2396
03-26 21:49:58.065 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 21 Time taken: 354
03-26 21:49:58.640 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 22 Time taken: 574
03-26 21:49:59.369 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 23 Time taken: 728
03-26 21:50:00.112 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 24 Time taken: 742
03-26 21:50:00.834 13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 25 Time taken: 721

As you can see from the above, it takes about 18-25 seconds to retrieve 25 frames from a video just under 5 seconds long.
I have also tried this library, which uses FFmpeg underneath, to do the same thing. I am not sure how well it is implemented, but it only improves the overall performance by a couple of seconds, so it still takes about 15-20 seconds.
So my question is: is there a way to do this faster? My friend has an iOS app that does something similar, grabbing even more frames, and it only takes a couple of seconds, but he is not sure how to do it on Android.
Is there anything on Android that would speed up the process? Am I approaching this wrong?
The end goal is to stitch the frames together into a GIF.
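
For reference, the logs above show that the repeated getFrameAtTime calls are where the time goes; FFmpeg can instead do the 5-frames-per-second sampling and the GIF assembly in a single sequential decode pass. The sketch below shows that operation as a desktop-side call to the ffmpeg binary, written in Python purely for illustration; the paths are placeholders, and on Android the same arguments would go through whichever FFmpeg wrapper library is used rather than subprocess.

# Illustration only: sample a video at 5 frames per second and build a GIF in one
# ffmpeg pass, instead of seeking for every frame individually.
import subprocess

def video_to_gif(video_path, gif_path, fps=5):
    subprocess.run(
        [
            "ffmpeg",
            "-i", video_path,      # decode once, sequentially
            "-vf", f"fps={fps}",   # keep 5 frames per second
            gif_path,              # e.g. "out.gif"
        ],
        check=True,
    )

video_to_gif("/path/to/some/video.mp4", "out.gif")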