
Media (1)
-
Bee video in portrait
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (66)
-
Customising categories
21 June 2013, by
Category creation form
For those who know SPIP well, a category can be thought of as a rubrique (section).
For a document of type category, the fields offered by default are: Text
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type media, the fields not displayed by default are: Descriptif rapide
It is also in this configuration section that you can specify the (...)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, allowing it to spread to new linguistic communities.
To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...)
On other sites (11490)
-
converting a "gif" to video using swift
3 December 2019, by James Woodrow
I've looked around and found a few things here and there, mainly that I should be using AVAssetWriter to do this, but I have zero experience with it and with video editing/creation, so it doesn't help me much: I can't seem to find anything I can easily modify (at my level of knowledge, at least) so that it works as I intend.
I have an app which takes n photos every cft seconds (capture frame time, which I get from a backend server; it's a double for obvious reasons). I then display these frames using a UIImageView, and the frames change every dft seconds (display frame time, which I also get from a backend server and which can be different from cft). Up until this point, nothing complicated.
Currently the workflow is that these frames are sent back to a server with any relevant information I want, and the server then uses ImageMagick to create a real gif file and ffmpeg to create a 15-second video using said gif.
The issue is that this keeps my Heroku server bills higher than I would like, because of the limited memory on the dynos, and generating these videos takes about 5-10 seconds I believe (not sure, but it's longer than I'd like).
So the idea I had was to make the app create the video, since it already has all the information it needs for this, and then simply upload it with the rest of the frames and relevant data. Using bandwidth nowadays is much cheaper than buying extra processing power on a server. The app is well placed for this:
- it has n frames to loop over
- it has a float value dft representing how long each frame should last
- it has a GPU, or at least a much better CPU than the dynos Heroku has to offer
I've also looked around to see if anyone has made an extensive tutorial on how to use ffmpeg in Swift, but I still didn't find anything at my level; I didn't even find a tutorial per se, only some GitHub projects which were partially complete and/or without the original tutorial linked to understand the thought process.
I would appreciate any tips/code samples/tutorials on the subject.
I'm adding the ffmpeg command-line equivalent to what I would love to be able to do (if I could use ffmpeg directly with iOS, that could be nice too):
ffmpeg -framerate 100/13 -loop 1 -i frame%02d.png -c:v libx264 -r 100/13 -pix_fmt yuv420p -t 0:15 instagram.mp4
where basically I used 100 / (dft * 100) for the input frame rate (for example, dft = 0.13 yields the 100/13 above) and just output at the same fps for 15 seconds. By the way, if there are any ways to optimise this command to make it run faster without losing quality, I might be able to keep the current way of functioning with Heroku, although I would still prefer an iOS solution.
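Since the question explicitly asks for a code sample, here is a minimal sketch of the AVAssetWriter approach it mentions, assuming the frames are already held in memory as same-sized UIImages; writeVideo and pixelBuffer are hypothetical helper names, and error handling is kept to a minimum:
import AVFoundation
import UIKit
// Minimal sketch: write `frames` to an H.264 .mp4 where every frame
// lasts `dft` seconds (playing the role of -framerate in the command above).
func writeVideo(frames: [UIImage], dft: Double, outputURL: URL,
                completion: @escaping (Error?) -> Void) throws {
    guard let first = frames.first else { return }
    let width = Int(first.size.width), height = Int(first.size.height)
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32ARGB
        ])
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)
    // A large timescale keeps the Double -> CMTime conversion accurate.
    let perFrame = CMTime(value: CMTimeValue(dft * 600), timescale: 600)
    var index = 0
    input.requestMediaDataWhenReady(on: DispatchQueue(label: "video.writer")) {
        while input.isReadyForMoreMediaData && index < frames.count {
            let pts = CMTimeMultiply(perFrame, multiplier: Int32(index))
            if let buffer = pixelBuffer(from: frames[index], width: width, height: height) {
                adaptor.append(buffer, withPresentationTime: pts)
            }
            index += 1
        }
        if index == frames.count {
            input.markAsFinished()
            writer.finishWriting { completion(writer.error) }
        }
    }
}
// Draws a UIImage into a CVPixelBuffer via Core Graphics.
func pixelBuffer(from image: UIImage, width: Int, height: Int) -> CVPixelBuffer? {
    var pb: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32ARGB, nil, &pb)
    guard let buffer = pb, let cg = image.cgImage else { return nil }
    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }
    let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                            width: width, height: height, bitsPerComponent: 8,
                            bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    context?.draw(cg, in: CGRect(x: 0, y: 0, width: width, height: height))
    return buffer
}
Appending is driven by requestMediaDataWhenReady, so frames are only written when the writer can accept them, and the per-frame CMTime derived from dft reproduces the constant-frame-duration behaviour of the ffmpeg command above.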
-
aacenc_pred : rework the way prediction is done
29 August 2015, by Rostislav Pehlivanov
aacenc_pred: rework the way prediction is done
This commit completely alters the algorithm of prediction.
The original commit which introduced prediction was completely incorrect to even remotely care about what the actual coefficients contain or whether any options were enabled. Not my actual fault.
This commit treats prediction the way the decoder does and expects it to be done: like lossy encryption. Everything related to prediction now happens at the very end, just before quantization and encoding of the coefficients. On the decoder side, prediction happens before anything has had a chance to even access the coefficients.
Also, the original implementation had problems because it actually touched the band_type of special bands which already had their scalefactor indices marked, and it's a wonder the assertion wasn't triggered when transmitting those.
Overall, this now drastically increases audio quality, and you should think about enabling it if you don't plan on playing anything encoded on really old, low-power, ultra-embedded devices, since they might not support decoding of prediction or AAC-Main. Though the specifications were written ages ago, and as times change, so do the FLOPS.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
-
Android JavaCV FFmpeg webstream to local static website
26 March 2017, by Thomas Devoogdt
For my integration test, I'm working on an application that needs to provide a live stream to a locally hosted website. I've already built a working site that runs on nanohttpd. This application also performs special image processing, for which I use JavaCV. The library is working perfectly and all the C++ bindings work too.
My question: how can I set up a live stream that can be played directly in a static site hosted by nanohttpd? Am I on the right track?
My code:
init:
private void initLiveStream() throws FrameRecorder.Exception {
/* ~~~ https://github.com/bytedeco/javacv/issues/598 ~~~ */
frameRecorder = new FFmpegFrameRecorder("http://localhost:9090", imageWidth, imageHeight, 0);
frameRecorder.setVideoOption("preset", "ultrafast");
frameRecorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
frameRecorder.setAudioCodec(0);
frameRecorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
frameRecorder.setFormat("webm");
frameRecorder.setGopSize(10);
frameRecorder.setFrameRate(frameRate);
frameRecorder.setVideoBitrate(5000);
frameRecorder.setOption("content_type","video/webm");
frameRecorder.setOption("listen", "1");
frameRecorder.start();
}

In my CameraView:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
Camera.Size size = camera.getParameters().getPreviewSize();
Frame frame = new AndroidFrameConverter().convert(data, size.width, size.height);
try {
if(frameRecorder!=null){
frameRecorder.record(frame);
}
} catch (FrameRecorder.Exception e) {
e.printStackTrace();
}
}

Here is one of the stack traces that are shown frequently in my search for a solution:
org.bytedeco.javacv.FrameRecorder$Exception: avio_open error() error -111: Could not open 'http://localhost:9090'
I couldn’t find any other thread addressing this specific issue.
Thanks in advance
EDIT
Thanks to Chester Cobus, here is the code I used:
Websocket:
//Constructor
AsyncHttpServer serverStream = new AsyncHttpServer();
List<WebSocket> sockets = new ArrayList<>();
//http://stackoverflow.com/a/33021907/5500092
//I'm planning to use more sockets. This is the only uniform expression I found.
serverStream.websocket("/((?:[^/]*/)*)(.*)", new AsyncHttpServer.WebSocketRequestCallback() {
@Override
public void onConnected(final WebSocket webSocket, AsyncHttpServerRequest request) {
String uri = request.getPath();
if (uri.equals("/live")) {
sockets.add(webSocket);
//Use this to clean up any references to your websocket
webSocket.setClosedCallback(new CompletedCallback() {
@Override
public void onCompleted(Exception ex) {
try {
if (ex != null)
Log.e("WebSocket", "Error");
} finally {
sockets.remove(webSocket);
}
}
});
}
}
});
//Updater (Observer pattern)
@Override
public void updated(byte[] data) {
for (WebSocket socket : sockets) {
socket.write(new ByteBufferList(data));
}
}
Record Activity:
private long start_time = System.currentTimeMillis();
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
long now_time = System.currentTimeMillis();
if ((now_time - start_time) > 250) {
start_time = now_time;
//https://forums.xamarin.com/discussion/40991/onpreviewframe-issue-converting-preview-byte-to-android-graphics-bitmap
Camera.Size size = camera.getParameters().getPreviewSize();
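            // Preview frames arrive as NV21 (the Android camera default), so wrap
            // the raw bytes in a YuvImage and compress them to JPEG for the socket.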
YuvImage image = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
image.compressToJpeg(new Rect(0, 0, size.width, size.height), 60, byteArrayOutputStream);
MainActivity.getWebStreamer().updated(byteArrayOutputStream.toByteArray());
}
}

JavaScript:
var socket;
var imageElement;
/**
* path - String.Format("ws://{0}:8090/live", Window.Location.HostName)
* image - HTMLImageElement
*/
function imageStreamer(path, image) {
imageElement = image;
socket = new WebSocket(path);
socket.onmessage = function(msg) {
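        // msg.data is a Blob (the WebSocket default binaryType),
        // which FileReader can read directly as a data URL.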
var arrayBuffer = msg.data;
var reader = new FileReader();
reader.onload = function(e) {
imageElement.src = e.target.result;
};
reader.readAsDataURL(arrayBuffer);
};
}