
Other articles (9)
-
Contribute to documentation
13 April 2011
Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation by users as well as developers, including:
- critiques of existing features and functions
- articles contributed by developers, administrators, content producers and editors
- screenshots to illustrate the above
- translations of existing documentation into other languages
To contribute, register for the project users’ mailing (...)
-
Keeping control of your media in your hands
13 April 2011
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible, and development is based on expanding the (...)
-
Installation in farm mode
4 February 2011
Farm mode makes it possible to host several MediaSPIP sites while installing their functional core only once.
This is the method we use on this very platform.
Using farm mode requires some understanding of how SPIP works, unlike the standalone version, which requires no real specific knowledge, since SPIP’s usual private area is no longer used.
First of all, you must have installed the same files as for the installation (...)
On other sites (3910)
-
How do I upscale an iOS App Preview video to 1080 x 1920? [closed]
12 April 2024, by Benjamin Thiel
I just captured a video of my new app running on an iPhone 6 using QuickTime Player and a Lightning cable. Afterwards I created an App Preview project in iMovie, exported it and could successfully upload it to iTunes Connect.
Apple requires developers to upload App Previews in different resolutions depending on screen size, namely:
- iPhone 5(S): 1080 x 1920 or 640 x 1136
- iPhone 6: 750 x 1334 (what I have)
- iPhone 6+: 1080 x 1920
Obviously, 1080 x 1920 kills two birds with one stone. I know that upscaling isn’t a perfect solution, but it meets my needs. Since I don’t own a 6+, another recording session won’t do the trick.
Unfortunately, iTunes Connect is extremely picky about what it accepts. Here’s what I tried, to no avail:
- Handbrake, iMovie and QuickTime do not support upscaling
- MPEG Streamclip
- ffmpeg -i input.mp4 -acodec copy -vf scale=1080:1920 output.mp4
Strangely enough, iTunes Connect keeps complaining about the wrong resolution when I try to upload the output.mp4 produced by ffmpeg.
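A plausible culprit (my assumption, not something the post confirms): even with -acodec copy the video track is re-encoded because of the scale filter, and the default settings can leave a non-square sample aspect ratio or an unusual pixel format that the validator rejects. A sketch that pins both down, assuming input.mp4 is the 750 x 1334 capture:
ffmpeg -i input.mp4 -vf "scale=1080:1920:flags=lanczos,setsar=1" -c:v libx264 -pix_fmt yuv420p -c:a copy output.mp4
Note that the two sizes are not exactly the same shape (750 × 1.44 = 1080, but 1334 × 1.44 ≈ 1921), so the scale introduces a sub-pixel vertical stretch; setsar=1 ensures the output still reports square pixels.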
-
FFMPEG: Recurring onMetaData for RTMP? [on hold]
30 November 2017, by stevendesu
For whatever reason this was put on hold as "too broad", although I felt I was quite specific. So I’ll try rephrasing here:
My former understanding:
The RTMP protocol involves sending several parallel streams of data as a series of packets, with an ID correlating to which stream they are a part of. For instance:
[VIDEO] <data>
[AUDIO] <data>
[VIDEO] <data>
[VIDEO] <data>
[SERVER] <metadata about bandwidth>
[VIDEO] <data>
[AUDIO] <data>
...
Then on the player side these packets are split up into separate buffers based on type (all video data is concatenated, all audio data is concatenated, etc.).
One of the packet types is called onMetaData (ID: 0x12).
An onMetaData packet includes a timestamp for when to trigger the metadata (this way it can be synchronized with the video) as well as the contents of the metadata (a text string).
My setup:
I’m using Red5Pro as my ingest server to take in an RTMP stream and then watch this stream via WebRTC. When an onMetaData packet is received by Red5, it sends out a JSON object with the contents of the metadata to all subscribers of the stream over WebSockets.
What I want:
I want to take advantage of this onMetaData channel to embed the server’s system clock into a stream. This way anyone viewing the stream can determine when (according to the server) a stream was encoded and, if they synchronize their clock with the server, they can then compute the end-to-end latency of the stream. Due to Red5’s use of WebSockets to send metadata this isn’t a perfect solution (you may receive the metadata before or after you actually receive the video information), however I have some plans to work around this.
In other words, I want my stream to look like this:
[VIDEO] <data>
[AUDIO] <data>
[ONMETADATA] time: 2:05:77.382
[VIDEO] <data>
[VIDEO] <data>
[SERVER] <metadata about bandwidth>
[VIDEO] <data>
[ONMETADATA] time: 2:05:77.423
[AUDIO] <data>
...
What I would like is to generate this stream (with the server’s current time periodically embedded into the onMetaData channel) using FFMPEG.
Simpler problem:
FFMPEG offers a -metadata command-line parameter.
In my experiments, using this parameter caused a single onMetaData event to be fired, including things like "title", "author", etc. I could not inject additional onMetaData packets periodically as the stream progressed.
Even if the metadata packets do not contain the system clock, if I could send any metadata packets periodically using FFMPEG then I could include something static like "the server’s clock at the time the broadcast started". I can then compare this to the current timestamp of the video and calculate the latency.
My confusion:
Continuing to look into this after creating my post, there are a couple of things that I don’t fully understand or which don’t quite make sense to me. For one, if FFMPEG is only injecting a single onMetaData packet into the stream, then I would expect anyone joining the stream late to miss it. However, when I join the stream 8 hours later I see Red5 send me the metadata packet complete with title, author, etc. So it’s almost as if the metadata packet doesn’t have a timestamp associated with it but instead is just generic metadata about the video.
Furthermore, there’s something called "AMF" which I’m not familiar with, but it may be important?
Original Post
I spent today playing around with methods to embed the system clock at time of encode into a stream, so that I could compare this value to the same system clock at time of decode to get a rough estimate of RTMP latency. Unfortunately the majority of techniques I used ended up failing.
One thing I wanted to try next was taking advantage of RTMP’s onMetaData to send the current system clock periodically (maybe every 5 seconds) as part of the stream for any clients to listen for.
Unfortunately FFMPEG’s -metadata option seems to be only for one-time metadata when the stream first loads. I can’t figure out how to add continuous (and generated) values to a stream.
Is there a way to do this?
-
Evolution #3638 (New): Use Fulltext search by default for the {recherche} criterion
7 January 2016, by Gilles VINCENT
Following the discussion started on spip-dev, I suggest using Fulltext search by default, instead of the current REGEXP queries:
http://thread.gmane.org/gmane.comp.web.spip.devel/66780
By default, the search generates REGEXP-type queries, which is only useful if users search with regular expressions, which they do not. Since commit r21697 the tables are installed in MyISAM format by default (ticket #2727). SPIP would therefore benefit from using Fulltext searches.
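For illustration only (the index name and column list below are my assumptions, not the actual SPIP patch), a MyISAM Fulltext version of the search could look like this:
ALTER TABLE spip_articles ADD FULLTEXT INDEX idx_recherche (surtitre, titre, soustitre, chapo, texte, ps, nom_site, url_site, descriptif);
SELECT id_article, titre FROM spip_articles WHERE MATCH(surtitre, titre, soustitre, chapo, texte, ps, nom_site, url_site, descriptif) AGAINST ('Etats unis' IN NATURAL LANGUAGE MODE);
Unlike REGEXP or LIKE '%...%', MATCH ... AGAINST goes through the index and matches whole words rather than substrings.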
Detailed analysis of the problem by Remi (root@lautre.net):
The problem is that this type of query is generated as soon as there is a space in the search, and REGEXP queries do not use indexes.
So this does not only concern users who search with regular expressions.
For a search as simple as "chercher un mot", it runs a REGEXP against every column of the spip_articles table. Obviously this returns every article containing "un".
A particularly inefficient example: 51 s to search for 2 words, without even the certainty that both words are present:
# Query_time: 51.073814 Lock_time: 0.018906 Rows_sent: 16363 Rows_examined: 36816
SELECT t.id_article, t.surtitre, t.titre, t.soustitre, t.chapo, t.texte,
t.ps, t.nom_site, t.url_site, t.descriptif
FROM `xxxxxx`.spip_articles AS t
WHERE t.surtitre REGEXP 'Etats unis|Etats|unis'
OR t.titre REGEXP 'Etats unis|Etats|unis'
OR t.soustitre REGEXP 'Etats unis|Etats|unis'
OR t.chapo REGEXP 'Etats unis|Etats|unis'
OR t.texte REGEXP 'Etats unis|Etats|unis'
OR t.ps REGEXP 'Etats unis|Etats|unis'
OR t.nom_site REGEXP 'Etats unis|Etats|unis'
OR t.url_site REGEXP 'Etats unis|Etats|unis'
OR t.descriptif REGEXP 'Etats unis|Etats|unis';
Here, it returns articles that contain "punissons" or "réunis".
"réunis".A ce niveau, un bête like me met "que" 5s :
SELECT * FROM (
SELECT *, CONCAT(t.surtitre, t.titre, t.soustitre, t.chapo, t.texte, t.ps, t.nom_site, t.url_site, t.descriptif) AS search
FROM spip_articles t
) s
WHERE s.search LIKE '%unis%' OR s.search LIKE '%Etats%';
But honestly, 5 s is not acceptable either.
But if you need a default other than fulltext, use LIKE on the search string. It will be faster and will return more consistent results.
Remi