
Media (3)
-
GetID3 - File information block
9 April 2013, by
Updated: May 2013
Language: French
Type: Image
-
GetID3 - Additional buttons
9 April 2013, by
Updated: April 2013
Language: French
Type: Image
-
Collections - Quick creation form
19 February 2013, by
Updated: February 2013
Language: French
Type: Image
Other articles (96)
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site and around MediaSPIP in general aims to avoid reference to Web 2.0 and the companies that profit from media-sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
Updating from version 0.1 to 0.2
24 June 2013, by
Explanation of the various notable changes when moving from MediaSPIP version 0.1 to version 0.3. What is new?
Regarding software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...) -
The farm's regular Cron tasks
1 December 2010, by
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of all the instances of the shared hosting on a regular basis. Combined with a system Cron on the central site of the shared hosting, this makes it possible to generate regular visits to the various sites and keep the tasks of rarely visited sites from being too (...)
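The system Cron described above can be sketched as a single crontab entry on the central site; the URL and the query parameter below are hypothetical placeholders, not MediaSPIP's actual endpoint:

```shell
# Hypothetical crontab entry: hit the farm's cron URL every minute so that
# rarely visited sites in the shared hosting still run their scheduled tasks.
* * * * * wget -q -O /dev/null "http://central.example.org/spip.php?action=cron"
```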
On other sites (10911)
-
Playing RTP stream on Android 4.1.2 (Jelly Bean) [closed]
27 December 2024, by Homie_Tomie
I'll try to keep it quick. Using FFMPEG I started a stream on my PC. Here is the code:


import subprocess

def start_stream():
    command = [
        'ffmpeg',
        '-f', 'gdigrab',            # Desktop capture (Windows)
        '-framerate', '15',         # Low framerate for higher performance
        '-i', 'desktop',            # Capture the desktop
        '-c:v', 'libx264',          # Video codec (H.264)
        '-preset', 'ultrafast',     # Ultra-fast encoding preset for minimal latency
        '-tune', 'zerolatency',     # Zero latency for real-time streaming
        '-x264opts', 'keyint=15:min-keyint=15:no-scenecut',  # Frequent keyframes
        '-b:v', '500k',             # Low bitrate to minimize data usage and reduce latency
        '-s', '800x480',            # Resolution fits the phone screen and helps performance
        '-max_delay', '0',          # No buffering, instant frame output
        '-flush_packets', '1',      # Flush packets immediately after encoding
        '-f', 'rtp',                # RTP output muxer
        '-sdp_file', 'stream.sdp',  # Write an SDP file describing the stream
        'rtp://192.168.72.26:1234', # Stream over RTP/UDP to 192.168.72.26 on port 1234
    ]

    try:
        print("Starting stream...")
        subprocess.run(command, check=True)
    except subprocess.CalledProcessError as e:
        print(f"Error occurred: {e}")
    except KeyboardInterrupt:
        print("\nStream interrupted")

if __name__ == "__main__":
    print("Starting screen capture...")
    start_stream()



Now, when I start the stream I can connect to it in VLC by opening the stream.sdp file. Using the same method I can open the stream on my iPhone, but when I try to open it on my old Android phone the stream connects and the screen stays black. However, when I rotate the screen I can see the first frame that was sent to the phone. Why doesn't the stream work?


I will be thankful for any and all advice :)
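Not part of the original question, but one common stumbling block worth illustrating: hardware decoders on Jelly Bean-era Android devices are only guaranteed to support H.264 Baseline profile, while libx264's defaults can produce Main or High profile. A minimal sketch of extra compatibility flags — the specific profile/level values here are an assumption, not something confirmed by the question:

```python
# Sketch: extra ffmpeg flags sometimes needed for old Android decoders.
# '-profile:v baseline' and '-level 3.0' are assumed values for Jelly
# Bean-era hardware; '-pix_fmt yuv420p' forces the widely supported
# 4:2:0 chroma layout.
compat_flags = [
    '-profile:v', 'baseline',  # Baseline profile: no B-frames, no CABAC
    '-level', '3.0',           # Conservative level for old hardware decoders
    '-pix_fmt', 'yuv420p',     # 4:2:0 pixel format expected by most phones
]

def with_compat(command):
    """Insert the compatibility flags just before the output URL."""
    return command[:-1] + compat_flags + command[-1:]

base = ['ffmpeg', '-i', 'desktop', '-c:v', 'libx264', 'rtp://192.168.72.26:1234']
print(with_compat(base))
```

Output options in ffmpeg must precede the output URL they apply to, which is why the helper splices the flags in before the last element.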


-
matplotlib 3D linecollection animation gets slower over time
15 June 2021, by Vignesh Desmond
I'm trying to animate a 3d line plot for attractors, using Line3DCollection. The animation is initially fast but it gets progressively slower over time. A minimal example of my code:


import subprocess

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Line3DCollection

def generate_video(nframes):

    fig = plt.figure(figsize=(16, 9), dpi=120)
    canvas_width, canvas_height = fig.canvas.get_width_height()
    ax = fig.add_axes([0, 0, 1, 1], projection='3d')

    X = np.random.random(nframes)
    Y = np.random.random(nframes)
    Z = np.random.random(nframes)

    cmap = plt.cm.get_cmap("hsv")
    line = Line3DCollection([], cmap=cmap)
    ax.add_collection3d(line)
    line.set_segments([])

    def update(frame):
        i = frame % len(X)
        points = np.array([X[:i], Y[:i], Z[:i]]).transpose().reshape(-1, 1, 3)
        segs = np.concatenate([points[:-1], points[1:]], axis=1)
        line.set_segments(segs)
        line.set_array(np.array(Y))  # Color gradient
        ax.elev += 0.0001
        ax.azim += 0.1

    outf = 'test.mp4'
    cmdstring = ('ffmpeg',
                 '-y', '-r', '60',  # overwrite, 60 fps
                 '-s', '%dx%d' % (canvas_width, canvas_height),
                 '-pix_fmt', 'argb',
                 '-f', 'rawvideo', '-i', '-',
                 '-b:v', '5000k', '-vcodec', 'mpeg4', outf)
    p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

    for frame in range(nframes):
        update(frame)
        fig.canvas.draw()
        string = fig.canvas.tostring_argb()
        p.stdin.write(string)

    p.communicate()

generate_video(nframes=10000)



I used the code from this answer to save the animation to mp4 using ffmpeg instead of anim.FuncAnimation, as it's much faster for me. But both methods get slower over time and I'm not sure how to keep the animation from slowing down. Any advice is welcome.


Versions:
Matplotlib: 3.4.2
FFMpeg: 4.2.4-1ubuntu0.1
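A likely cause of the slowdown, worth sketching (this diagnosis is an inference, not from the original post): each call to update(frame) rebuilds and redraws segments for all points up to index i, so per-frame work grows linearly and total work quadratically. Capping the collection to a sliding window keeps the per-frame cost constant:

```python
import numpy as np

def trailing_segments(X, Y, Z, i, window=500):
    """Build line segments for only the last `window` points up to index i,
    so the redraw cost stays bounded instead of growing with i."""
    start = max(0, i - window)
    points = np.array([X[start:i], Y[start:i], Z[start:i]]).T.reshape(-1, 1, 3)
    # Pair consecutive points into (window - 1) segments of shape (2, 3)
    return np.concatenate([points[:-1], points[1:]], axis=1)

# The segment count no longer grows without bound:
X = Y = Z = np.random.random(10000)
print(trailing_segments(X, Y, Z, 10000).shape[0])  # 499 segments, not 9999
```

Inside update(), `line.set_segments(trailing_segments(X, Y, Z, i))` would replace the ever-growing `segs`, at the cost of the oldest part of the trajectory fading out of the plot.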


-
h264 lossless coding
29 September 2014, by cloudraven
Is it possible to do completely lossless encoding in h264? By lossless, I mean that if I feed it a series of frames and encode them, and then extract all the frames from the encoded video, I will get the exact same frames as in the input, pixel by pixel, frame by frame. Is that actually possible?
Take this example: I generate a bunch of frames, then I encode the image sequence to an uncompressed AVI (with something like virtualdub), and I then apply lossless h264 (the help files claim that setting --qp 0 makes lossless compression, but I am not sure if that means that there is no loss at any point of the process or just that the quantization is lossless). I can then extract the frames from the resulting h264 video with something like mplayer.
I tried with Handbrake first, but it turns out it doesn't support lossless encoding. I tried x264 but it crashes. It may be because my source AVI file is in RGB colorspace instead of YV12. I don't know how to feed a series of YV12 bitmaps to x264, or in what format, so I cannot even try.
In summary, what I want to know is whether there is a way to go from:
Series of lossless bitmaps (in any colorspace) -> some transformation -> h264 encode -> h264 decode -> some transformation -> the original series of lossless bitmaps
Is there a way to achieve this?
EDIT: There is a VERY valid point about lossless H264 not making too much sense. I am well aware that there is no way I could tell (with just my eyes) the difference between an uncompressed clip and another compressed at a high rate in H264, but I don't think it is without uses. For example, it may be useful for storing video for editing without taking huge amounts of space, without losing quality, and without spending too much encoding time every time the file is saved.
UPDATE 2: Now x264 doesn't crash. I can use as sources either avisynth or lossless yv12 lagarith (to avoid the colorspace compression warning). However, even with --qp 0 and an RGB or YV12 source I still get some differences, minimal but present. This is troubling, because all the information I have found on lossless predictive coding (--qp 0) claims that the whole encoding should be lossless, but I am unable to verify this.
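The "minimal but present" differences described in the update are consistent with a colorspace conversion on the way into the encoder rather than with lossy quantization: --qp 0 is lossless in the encoder's own colorspace, but converting an RGB source to YV12 first discards chroma information. A minimal sketch of the distinction, using ffmpeg's libx264/libx264rgb wrappers rather than the standalone x264 binary (an assumption, since the question uses x264 directly):

```python
def lossless_encode_cmd(src, dst, rgb=True):
    """Build an ffmpeg command for lossless H.264. '-qp 0' disables
    quantization loss; for RGB sources, libx264rgb keeps the RGB
    colorspace end to end and so avoids the lossy RGB -> YUV
    conversion that would otherwise break bit-exactness."""
    codec = 'libx264rgb' if rgb else 'libx264'
    return ['ffmpeg', '-y', '-i', src, '-c:v', codec, '-qp', '0', dst]

# Usage (paths are placeholders):
print(lossless_encode_cmd('frames.avi', 'lossless.mkv'))
```

To verify bit-exactness of the round trip, decoding both files to the same pixel format and comparing per-frame hashes (for instance with ffmpeg's framemd5 muxer) is one practical check.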