
Other articles (22)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things):
- implementation costs to be shared between several different projects/individuals
- rapid deployment of multiple unique sites
- creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (3663)
-
Using ffmpeg in Java
21 September 2014, by Riccardo Bestetti
I'm writing a Java program. I'm receiving an MPEG video stream via TCP and I have to decode it and re-encode it in a different format.
I have found JJMPEG, which provides ffmpeg wrappers for Java, but it hasn't been updated in two years. So I thought of implementing my conversion by spawning an ffmpeg process from Java, piping the stream into it, and getting the data back from another pipe; a rough sketch of that idea is shown below.
Would that be an optimal solution? I need feedback from people experienced in both Java and ffmpeg programming.
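A minimal sketch of that spawn-and-pipe approach, assuming Java 8+, ffmpeg available on the PATH, and a placeholder source host/port and output options (none of these values come from the original question):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class FfmpegPipe {
    public static void main(String[] args) throws Exception {
        // ffmpeg reads the incoming stream from stdin ("-i -") and writes the
        // re-encoded result to stdout ("-"); the format/codec flags are illustrative.
        ProcessBuilder pb = new ProcessBuilder(
                "ffmpeg", "-i", "-", "-f", "mpegts", "-vcodec", "libx264", "-");
        pb.redirectError(ProcessBuilder.Redirect.INHERIT); // send ffmpeg's log to the console
        Process ffmpeg = pb.start();

        try (Socket source = new Socket("192.168.1.10", 5000); // hypothetical TCP source
             InputStream tcpIn = source.getInputStream();
             OutputStream toFfmpeg = ffmpeg.getOutputStream();
             InputStream fromFfmpeg = ffmpeg.getInputStream()) {

            // Feed the TCP stream into ffmpeg on a separate thread so the two
            // pipes cannot deadlock when their buffers fill up.
            Thread feeder = new Thread(() -> {
                byte[] buf = new byte[8192];
                int r;
                try {
                    while ((r = tcpIn.read(buf)) != -1) {
                        toFfmpeg.write(buf, 0, r);
                    }
                    toFfmpeg.close(); // EOF tells ffmpeg the input is finished
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            feeder.start();

            // Consume the converted stream from ffmpeg's stdout.
            byte[] out = new byte[8192];
            int n;
            while ((n = fromFfmpeg.read(out)) != -1) {
                System.out.write(out, 0, n); // forward the data wherever it needs to go
            }
            System.out.flush();
            feeder.join();
        }
        ffmpeg.waitFor();
    }
}

Whether this beats a native wrapper depends mostly on how much ffmpeg's process start-up cost and the extra copies through the pipes matter for the use case.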
-
The neutering of Google Code-In 2011
Posting this from the Google Summer of Code Mentor Summit, at a session about Google Code-In!
Google Code-In is the most innovative open-source program I’ve ever seen. It provided a way for students who had never done open source — or never even done programming — to get involved in open source work. It made it easy for people who weren’t sure of their ability, who didn’t know whether they could do open source, to get involved and realize that yes, they too could do amazing work — whether code useful to millions of people, documentation to make the code useful, translations to make it accessible, and more. Hundreds of students had a great experience, learned new things, and many stayed around in open source projects afterwards because they enjoyed it so much!
x264 benefited greatly from Google Code-In. Most of the high bit depth assembly code was written through GCI — literally man-weeks of work by a professional developer, done by high-schoolers who had never written assembly before! Furthermore, we got loads of bugs fixed in ffmpeg/libav, a regression test tool, and more. And best of all, we gained a new developer: Daniel Kang, who is now a student at MIT, an x264 and libav developer, and has gotten paid work applying the skills he learned in Google Code-In!
Some students in GCI complained about the system being “unfair”. Task difficulties were inconsistent and there were many ways to game the system to get lots of points. Some people complained about Daniel — he was completing a staggering number of tasks, so they must be too easy. Yet many of the other students considered these tasks too hard. I mean, I’m asking high school students to write hundreds of lines of complicated assembly code in one of the world’s most complicated instruction sets, and to optimize it to meet extremely strict code-review standards! Of course, there may have been valid complaints about other projects: I did hear many students talking about gaming the system and finding the easiest, most “profitable” tasks. Though, with the payout capped at $500, the only prize for gaming the system is a high rank on the points list.
According to people at the session, in an effort to make GCI more “fair”, Google has decided to change the system. There are two big changes they’re making.
Firstly, Google is requiring projects to submit tasks on only two dates: the start, and the halfway point. But in Google Code-In, we certainly had no idea at the start what types of tasks would be the most popular — or which new ideas would come up over time. Often students would come up with ideas for tasks, which we could then add! A waterfall-style plan-everything-in-advance model does not work for real-world coding. The halfway-point addition may solve this somewhat, but it is still going to dramatically reduce the number of ideas that can be proposed as tasks.
Secondly, Google is requiring projects to submit at least 5 tasks in each category just to apply: quality assurance, translation, documentation, coding, outreach, training, user interface, and research. For large projects like Gnome, this is easy: they can certainly come up with 5 for each on such a large, general project. But for a small, focused project, some of these categories are often completely irrelevant. This rules out a huge number of smaller projects that just don’t have relevant work in all these categories. x264 may be saved here: as we work under the Videolan umbrella, we’ll likely be able to fudge enough tasks from Videolan to cover the gaps. But hundreds of other organizations are going to be out of luck. It would make more sense to require, say, 5 out of the 8 categories, to allow some flexibility while still encouraging interesting non-coding tasks.
For example, what’s “user interface” for a software library with a stable API, say, a libc? Can you make 5 tasks out of it that are actually useful?
If x264 applied on its own, could you come up with 5 real, meaningful tasks in each category for it? It might be possible, but it would require a lot of stretching.
How many smaller or more-focused projects do you think are going to give up and not apply because of this?
Is GCI supposed to be something for everyone, or just for Gnome, KDE, and other megaprojects?
-
Webcam stream with FFMpeg on iPhone
6 December 2011, by Saphrosit
I'm trying to send and show a webcam stream from a Linux server to an iPhone app. I don't know if it's the best solution, but I downloaded and installed FFMpeg on the Linux server (following, for those who want to know, this tutorial).
FFMpeg is working fine. After a lot of wandering, I managed to send a stream to the client by launching
ffmpeg -s 320x240 -f video4linux2 -i /dev/video0 -f mpegts -vcodec libx264 udp://192.168.1.34:1234
where 192.168.1.34 is the address of the client. Actually the client is a Mac, but it is supposed to be an iPhone. I know the stream is sent and received correctly (tested in different ways).
However, I haven't managed to watch the stream directly on the iPhone.
I thought of different (possible) solutions:
- First solution: store the incoming data in an NSMutableData object. Then, when the stream ends, save it to disk and play it using an MPMoviePlayerController. Here's the code:
[video writeToFile:@"videoStream.m4v" atomically:YES];
NSURL *url = [NSURL fileURLWithPath:@"videoStream.m4v"];
MPMoviePlayerController *videoController = [[MPMoviePlayerController alloc] initWithContentURL:url];
[videoController.view setFrame:CGRectMake(100, 100, 150, 150)];
[self.view addSubview:videoController.view];
[videoController play];
The problem with this solution is that nothing is played (I only see a black square), even though the video is saved correctly (I can play it directly from my disk using VLC). Besides, it's not such a great idea; it's just to make things work.
- Second solution: use CMSampleBufferRef to store the incoming video. Many more problems come with this solution: first of all, there's no CoreMedia.framework on my system. Besides, I don't really understand what this class represents and what I should do to make it work: I mean, if I start (somehow) filling this "SampleBuffer" with the bytes I receive from the UDP connection, will it automatically call the CMSampleBufferMakeDataReadyCallback function I set during creation? If yes, when? When a single frame is completed, or when the whole stream has been received?
- Third solution: use the AVFoundation framework (this isn't actually available on my Mac either). I couldn't figure out whether it's actually possible to start recording from a remote source, or even from an NSMutableData, a char* or something like that. In the AVFoundation Programming Guide I didn't find any reference saying whether it's possible or not.
I don't know which of these solutions is best for my purpose. ANY suggestion would be appreciated.
Besides, there's also another problem: I didn't use any segmenter program to send the video. Now, if I'm not mistaken, a segmenter splits the source video into smaller/shorter videos that are easier to send. If that's right, then maybe it's not strictly necessary to make things work (it could be added later). However, since the server is running under Linux, I cannot use Apple's mediastreamsegmenter. Can someone suggest an open-source segmenter to use together with FFMpeg?
UPDATE: I edited my question, adding more information about what I've done so far and what my doubts are.