
Other articles (112)
-
Customize by adding your logo, banner or background image
5 September 2013 — Some themes support three customization elements: adding a logo; adding a banner; adding a background image;
-
Submitting improvements and additional plugins
10 April 2011 — If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the official distribution will be considered.
You can use the development mailing list to announce it or to ask for help with writing the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...)
-
APPENDIX: The plugins used specifically for the farm
5 March 2010 — The central/master site of the farm needs several additional plugins, beyond those used by the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (12824)
-
Android: Recording and Streaming at the same time
23 April 2020, by Bruno Siqueira — This is not really a question so much as a presentation of all my attempts to implement one of the most challenging pieces of functionality I have faced.



I use the libstreaming library to stream real-time video to a Wowza server, and I need to record it to the SD card at the same time. I am presenting all my attempts below in order to collect new ideas from the community.



Copy bytes from the libstreaming stream to an MP4 file



Development



We created an interception point in the libstreaming library to copy all the sent bytes to an MP4 file. Libstreaming sends the bytes to the Wowza server through a LocalSocket. It uses MediaRecorder to access the device's camera and microphone, and points MediaRecorder's output at the LocalSocket, whose input stream libstreaming then reads from. What we do is wrap that input stream in a class extending InputStream which also opens a file output stream. So, every time libstreaming reads from the LocalSocket's input stream, we copy all the data to the file output stream, trying to build a valid MP4 file; a rough sketch of that wrapper is shown below.
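
A minimal sketch of what that wrapper looks like, assuming the data is consumed through the standard InputStream read methods (the class name and file handling here are illustrative, not libstreaming API):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Wraps the LocalSocket's InputStream and mirrors every byte that
// libstreaming reads into a local file on the SD card.
public class TeeInputStream extends InputStream {
    private final InputStream source;     // stream libstreaming reads from
    private final FileOutputStream copy;  // local copy of the raw bytes

    public TeeInputStream(InputStream source, File target) throws IOException {
        this.source = source;
        this.copy = new FileOutputStream(target);
    }

    @Override
    public int read() throws IOException {
        int b = source.read();
        if (b != -1) copy.write(b);
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = source.read(buf, off, len);
        if (n > 0) copy.write(buf, off, n);  // mirror exactly what was read
        return n;
    }

    @Override
    public void close() throws IOException {
        copy.close();
        source.close();
    }
}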



Impediment



When we tried to read the file, it was corrupted. We realized that metadata was missing from the MP4 file, specifically the moov atom. We tried delaying the closing of the stream to give it time to send this header (this was still a guess), but it didn't work. To test the coherence of the data, we used a paid piece of software to try to recover the video, including the header. It became playable, but it was mostly a green screen, so this was not a trustworthy solution. We also tried "untrunc", a free open-source command-line program, and it couldn't even start the recovery, since there was no moov atom.
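
For reference, a quick way to confirm that the moov atom is missing (the file name below is just a placeholder) is to run the copy through ffprobe on a desktop machine:

ffprobe -v error copied_stream.mp4

which fails with an error along the lines of "moov atom not found" instead of printing stream information.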



Use ffmpeg compiled for Android to access the camera



Development



FFmpeg has a Gradle plugin with a Java interface for using it inside Android apps. We thought we could access the camera from the command line (it is probably at "/dev/video0") and send the stream to the media server, roughly along the lines of the command sketched below.
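
A sketch of the kind of command we had in mind (the device node, encoder settings and ingest URL are placeholders):

ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -preset ultrafast -f flv rtmp://example.com/live/stream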



Impediment



We got a "Permission denied" error when trying to access the camera. The workaround would be to root the device to gain access to it, but that makes the phones lose their warranty and could brick them.



Use ffmpeg compiled for Android combined with MediaRecorder



Development



We tried to make FFmpeg stream an MP4 file while it was still being recorded on the phone by MediaRecorder, roughly as sketched below.
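
A sketch of the kind of command we tried (the recording path and ingest URL are placeholders):

ffmpeg -re -i /sdcard/recording.mp4 -c copy -f flv rtmp://example.com/live/stream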



Impediment



FFmpeg cannot stream an MP4 file whose recording has not yet finished: the moov atom is only written when the recording is finalized, so the partially written file cannot be parsed.



Use ffmpeg compiled for Android with libstreaming



Development



Libstreaming uses a LocalServerSocket as the connection between the app and the server, so we thought we could point ffmpeg at the LocalServerSocket's local address and copy the stream directly to a local file on the SD card. Right after the streaming started, we also ran the ffmpeg command to start recording the data to a file. We believed that ffmpeg would then create the MP4 file properly, i.e. with the moov atom header included. The kind of command we hoped for is sketched below.
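
Something along these lines is what we were aiming for, assuming the socket could be addressed as a regular filesystem path (the unix: protocol syntax and the path are assumptions, and this is exactly what turned out not to be possible):

ffmpeg -i unix:///data/local/tmp/libstreaming.sock -c copy /sdcard/recording.mp4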



Impediment



The "address" created is not readable via command line, as a local address inside the phone. So the copy is not possible.



Use OpenCV



Development



OpenCV is an open-source, cross-platform library that provides building blocks for computer vision experiments and applications. It offers high-level interfaces for capturing, processing, and presenting image data. It has its own APIs for connecting to the device camera, so we started studying it to see whether it had the functionality needed to stream and record at the same time.



Impediment



We found out that the library is not really designed for this; it is aimed more at mathematical image manipulation. We were even advised to use libstreaming (which we already do).



Use Kickflip SDK



Development



Kickflip is a media streaming service that provides its own SDK for Android and iOS development. It also uses HLS, a newer protocol, instead of RTMP.



Impediment



Their SDK requires us to create an Activity with a camera view that occupies the entire screen of the device, which breaks the usability of our app.



Use Adobe AIR



Development



We started consulting developers of apps already available in the Play Store that already stream to servers.



Impediment



When we got in touch with those developers, they assured us that it would not be possible to record and stream at the same time with this technology. What's more, we would have to redo the entire app from scratch using Adobe AIR.



UPDATE



WebRTC



Development



We started using WebRTC following this great project. We included the signaling server in our Node.js server and started doing the standard handshake over a socket. We were still toggling between local recording and streaming via WebRTC.



Impediment



WebRTC does not work in every network configuration. Beyond that, the camera acquisition is all native code, which makes it much harder to copy or intercept the bytes.


-
lavu/channel_layout: change av_get_channel_layout() behavior at the next bump
4 October 2013, by Stefano Sabatini — lavu/channel_layout: change av_get_channel_layout() behavior at the next bump
The new syntax is preferred since it allows backward syntax compatibility with libswr when switching to the new option handling code with AV_OPT_TYPE_CHANNEL_LAYOUT.
With the new parser the string "1234" is interpreted as a channel layout mask, rather than as a number of channels, and is thus compatible with the current way of setting a channel layout as an integer (e.g. for the icl and ocl options) using integer option values.
ff_get_channel_layout() with compat=0 will be used in the AV_OPT_TYPE_CHANNEL handler code.
The user is encouraged to switch to the new forward-compatible syntax, which requires a trailing "c" when specifying a layout as a number of channels.
-
ffmpeg: Create a fake shadow below an alpha-channel webm/png sequence
6 May 2021, by Beneos Battlemaps — Purpose: I'd like to render out animated 3D meshes as a PNG sequence to use them as animated tokens for virtual tabletop games. To make the mesh look more natural I'd like to create a fake shadow beneath the actual token.


Problem: I have a PNG sequence with an alpha channel (as well as a webm file created from this PNG sequence with ffmpeg, if that makes things easier). To create the webm I use:
ffmpeg -framerate 24 -f image2 -i Idle_Top.%04d.png -c:v libvpx-vp9 -crf 25 -pix_fmt yuva420p Idle_Top.webm
(if it's relevant). I'd like to render the PNG sequence out to a webm file that combines the current images with the transparent shadow beneath the token.

Possible workflow: I think a good way to achieve the desired shadow effect is to use the alpha channel image as a mask over a black picture with the same resolution as the source image. That gives you a completely black version of the image. Then you need to place this image beneath the coloured image, offset by 10px to the left and 10px down, to create the illusion of perspective. Finally, the black image below the coloured image must also be partly transparent (30% visibility should be enough).
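
One way this workflow might translate into a single ffmpeg command (a rough, untested sketch; the colorchannelmixer and overlay parameters are guesses and will need tuning):

ffmpeg -framerate 24 -f image2 -i Idle_Top.%04d.png -filter_complex "[0:v]split[token][sh];[sh]format=rgba,colorchannelmixer=rr=0:gg=0:bb=0:aa=0.3[shadow];[shadow][token]overlay=x=10:y=-10:format=auto" -c:v libvpx-vp9 -crf 25 -pix_fmt yuva420p Idle_Top_shadow.webm

Here the darkened, 30%-opacity copy is used as the base layer and the original token is drawn on top of it, shifted 10px right and 10px up, which is equivalent to the shadow sitting 10px left and 10px down beneath the token (the shadow is clipped at the frame edge because the canvas size is unchanged).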



Assets: I've put the webm file and the PNG files on my Google Drive: https://drive.google.com/drive/folders/1wznGaPwhKc2UyPpSZBSISa1gs3oixsHR?usp=sharing


Though I work with ffmpeg on a regular basis, I have no clue where to start. Can you please help me out with this interesting problem?


Best regards
Ben