-
transmogrify.dexterity-1.6.3-1.lbn19.noarch
The transmogrify.dexterity package provides a transmogrifier pipeline section for updating field values of dexterity content objects. The blueprint name is transmogrify.dexterity.schemaupdater.
The schemaupdater section needs at least the path to the object to update. Paths to objects are always interpreted as relative to the context. Any writable field whose id matches a key in the current item will be updated with the corresponding value.
Fields that do not get a value from the pipeline are initialized with their default value or get a missing_value marker. This functionality will be moved into a separate constructor pipeline...
The schemaupdater section can also handle fields defined in behaviors.
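As an illustrative sketch (not taken from the package's documentation; the `_path` key and field names are assumptions based on common transmogrifier conventions), an item feeding the schemaupdater might look like:

```python
# Hypothetical pipeline item for the schemaupdater section.
# '_path' (an assumed convention) locates the object relative to the
# context; the remaining keys match writable dexterity field ids.
item = {
    '_path': 'folder/page-1',         # path relative to the context
    'title': 'Updated Title',         # matches the 'title' field id
    'description': 'A new summary.',  # matches the 'description' field id
}
```

Fields not present in the item would fall back to their default or missing_value, as described above.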
Located in
LBN
/
…
/
Plone and Zope
/
BastionLinux 19
-
transmogrify.extract-0.4.0-1.lbn19.noarch
This Transmogrifier blueprint extracts text from within the specified CSS id.
-
transmogrify.filesystem-1.0b6-1.lbn19.noarch
Transmogrifier source for reading files from the filesystem
This package provides a Transmogrifier data source for reading files, images and directories from the filesystem. The output format is geared towards constructing Plone File, Image or Folder content. It is also possible to add arbitrary metadata (such as titles and descriptions) to the content items, by providing these in a separate CSV file.
-
transmogrify.htmlcontentextractor-1.0-4.lbn19.noarch
Helpful transmogrifier blueprints to extract text or html out of html content.
transmogrify.htmlcontentextractor.auto
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This blueprint has a clustering algorithm that tries to automatically extract the content from the HTML template.
This is slow and not always effective. Often you will need to input your own template extraction rules.
In addition to extracting Title, Description and Text of items the blueprint will output
the rules it generates to a logger with the same name as the blueprint.
Setting debug mode on templateauto will give you details about the rules it uses. ::
...
DEBUG:templateauto:'icft.html' discovered rules by clustering on 'http://...'
Rules:
text= html //div[@id = "dal_content"]//div[@class = "content"]//p
title= text //div[@id = "dal_content"]//div[@class = "content"]//h3
Text:
TITLE: ...
MAIN-10: ...
MAIN-10: ...
MAIN-10: ...
Options
-------
condition
TAL Expression to control use of this blueprint
debug
default is ''
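A minimal section using the documented `condition` and `debug` options might look like this (the condition expression and section name are only examples):

```ini
[templateauto]
blueprint = transmogrify.htmlcontentextractor.auto
# only run on items that look like pages (illustrative condition)
condition = python:item.get('_type') == 'Document'
debug = true
```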
transmogrify.htmlcontentextractor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This blueprint extracts the title, description and body from html,
either via XPath, TAL or automatic cluster analysis.
Rules are in the form of ::
(title|description|text|anything) = (text|html|optional|tal) Expression
Where expression is either TAL or XPath
For example ::
[template1]
blueprint = transmogrify.htmlcontentextractor
title = text //div[@class='body']//h1[1]
_delete1 = optional //div[@class='body']//a[@class='headerlink']
_delete2 = optional //div[contains(@class,'admonition-description')]
description = text //div[contains(@class,'admonition-description')]//p[@class='last']
text = html //div[@class='body']
Note that for a single template, e.g. template1, ALL of the XPaths need to match; otherwise
that template will be skipped and the next template tried. If you'd like a single XPath
not to be necessary for the template to match, use the keyword `optional` or `optionaltext`
instead of `text` or `html` before the XPath.
When an XPath is applied within a single template, the HTML it matches is removed from the page,
so another rule in the same template can't match the same HTML fragment.
If a content part is not useful (e.g. redundant text, title or description), matching it this way
is an effective means of removing that HTML from the content.
To help debug your template rules you can set debug mode.
For more information about XPath see
- http://www.w3schools.com/xpath/default.asp
- http://blog.browsermob.com/2009/04/test-your-selenium-xpath-easily-with-firebug/
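The remove-on-match behaviour can be sketched in plain Python. This is an illustration using the stdlib ElementTree, not the blueprint's actual implementation; the sample page and XPaths are invented:

```python
import xml.etree.ElementTree as ET

# Toy page: one content div, one div we want to drop from the remainder.
html = ("<html><body>"
        "<div class='body'><h1>Page Title</h1><p>Main text.</p></div>"
        "<div class='footer'>boilerplate</div>"
        "</body></html>")

root = ET.fromstring(html)
# ElementTree has no parent pointers, so build a child -> parent map.
parents = {child: parent for parent in root.iter() for child in parent}

# Rule "title = text //div[@class='body']//h1[1]": take the text content.
content = root.find(".//div[@class='body']")
title = content.find(".//h1").text

# The matched HTML is removed from the page, so later rules (and the
# '_template' remainder) can no longer see it.
parents[content].remove(content)
remainder = ET.tostring(root, encoding="unicode")
```

After removal, `remainder` still contains the footer but no longer contains the extracted content.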
HTMLContentExtractor
====================
This blueprint extracts out fields from html either via xpath rules or by automatic cluster
analysis
transmogrify.htmlcontentextractor
---------------------------------
You can define a series of rules which will get applied to the '_text'
of the input item. Each rule uses an XPath or TAL expression to
extract html or text out of the html and adds it as a key to the outputted item.
Each option of the blueprint is a rule of the following form ::
(N-)field = (optional)(text|html|delete|optional) xpath
OR
(N-)field = (optional)tal tal-expression
"field" is the attribute that will be set with the results of the xpath
"format" is what to do with the results of the xpath. "optional" means the same
as "delete" but won't cause the group to not match. if the format is delete or optional
then the field name doesn't matter but will still need to be unique
"xpath' is an xpath expression
If the format is 'tal' then instead of an XPath use can use a TAL expression. TAL expression
is evaluated on the item object AFTER the XPath expressions have been applied.
For example ::
[template]
blueprint = transmogrify.htmlcontentextractor
title = text //div[@class='body']//h1[1]
_permalink = text //div[@class='body']//a[@class='headerlink']
_text = html //div[@class='body']
_label = optional //p[contains(@class,'admonition-title')]
description = optional //div[contains(@class,'admonition-description')]/p[@class='last']/text()
_remove_useless_links = optional //div[@id = 'indices-and-tables']
mimetype = tal string:text/html
text = tal python:item['_text'].replace('id="blah"','')
You can delete a number of parts of the html by extracting content to fields such as _permalink and _label.
These items won't be used to set any properties on the final content, so they are effective as a means
of deleting parts of the html.
TAL expressions are evaluated after XPath expressions, so we can post-process the _text XPath to produce a text
stripped of a certain id.
N is the group number. Groups are run in order of group number. If
any rule doesn't match (unless it's marked optional) then the next group
will be tried instead. Group numbers are optional.
Instead of groups you can also chain several blueprints together. The blueprint
will set '_template' on the item, and if another blueprint finds the '_template' key in an item
it will ignore that item.
The '_template' field is the remainder of the html once all the content selected by the
XPATH expressions have been applied.
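Chaining might look like this, reusing the rule syntax shown above (the XPaths and section names are illustrative); the second section skips any item the first has already stamped with '_template':

```ini
[template1]
blueprint = transmogrify.htmlcontentextractor
title = text //div[@class='body']//h1[1]
text = html //div[@class='body']

[template2]
blueprint = transmogrify.htmlcontentextractor
title = text //div[@id='content']//h2[1]
text = html //div[@id='content']
```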
transmogrify.htmlcontentextractor.auto
--------------------------------------
This blueprint will analyse the html and attempt to discover the rules to extract out the
title, description and body of the html.
If the logger output is in DEBUG mode, the XPaths used by the auto extractor will be output
to the logger.
-
transmogrify.pathsorter-1.0b4-2.lbn19.noarch
transmogrify.pathsorter is a blueprint for reordering items into tree sorted order
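"Tree sorted order" means parents come before their children, with siblings grouped together. A minimal sketch of that ordering (not the blueprint's code; the `_path` key is an assumed convention) is sorting by path components:

```python
# Sort items so that each container precedes its contents.
items = [
    {'_path': 'a/b/c'},
    {'_path': 'z'},
    {'_path': 'a'},
    {'_path': 'a/b'},
]
items.sort(key=lambda item: item['_path'].split('/'))
ordered = [item['_path'] for item in items]
```

After sorting, `'a'` precedes `'a/b'`, which precedes `'a/b/c'`.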
-
transmogrify.ploneremote-1.3-4.lbn19.noarch
transmogrify.ploneremote is a package of transmogrifier blueprints for uploading content to a Plone site via the Zope XML-RPC API.
The Plone site does not need any modifications; vanilla Zope XML-RPC is used.
-
transmogrify.print-0.6.0-1.lbn19.noarch
Note
As of version 1.3 Transmogrifier provides a similar feature, via a blueprint called: collective.transmogrifier.sections.logger.
This Transmogrifier blueprint is based on collective.transmogrifier.sections.tests.PrettyPrinter, which anyone can use in their project by creating a utility like so:
<utility
    component="collective.transmogrifier.sections.tests.PrettyPrinter"
    name="print" />
Then adding a section to your pipeline like so:
[transmogrifier]
pipeline =
    …
    print

[print]
blueprint = print
transmogrify.print has two advantages over the above approach:
It adds the utility for you
It allows you to specify a keys parameter to print individual keys. If no key is provided, it prints the entire item.
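A section printing only selected keys might look like this (the multi-line `keys` syntax is an assumption based on common transmogrifier option style):

```ini
[print]
blueprint = transmogrify.print
keys =
    _path
    title
```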
-
transmogrify.regexp-0.5.0-1.lbn19.noarch
transmogrify.regexp allows you to use regular expressions and format strings to search and replace key values in a transmogrifier pipeline.
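The idea can be sketched in plain Python with the stdlib `re` module (the key, pattern and replacement are illustrative, not the blueprint's option names):

```python
import re

# Rewrite one key of a pipeline item with a regexp search-and-replace,
# e.g. stripping an old site prefix from migrated paths.
item = {'_path': '/old-site/news/article-1.html'}
item['_path'] = re.sub(r'^/old-site', '', item['_path'])
```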
-
transmogrify.siteanalyser-1.3-3.lbn19.noarch
Transmogrifier blueprints that look at how html items are linked to gather metadata about items.
transmogrify.siteanalyser.defaultpage
Determines that an item is a default page for a container if it has many links to items in that container.
transmogrify.siteanalyser.relinker
Fix links in html content. Previous blueprints can adjust the '_path' and set the original path to '_origin' and relinker will fix all the img and href links. It will also normalize ids.
transmogrify.siteanalyser.attach
Find attachments which are only linked to from a single page. Attachments are merged into the linking item either by setting keys or moving it into a folder.
transmogrify.siteanalyser.title
Determine the title of an item from the link text used.
-
transmogrify.sqlalchemy-1.0.2-1.lbn19.noarch
This package implements a simple SQLAlchemy blueprint for collective.transmogrifier.
If you are not familiar with transmogrifier please read its documentation first to get a basic
understanding of how you can use this package.
This package implements the transmogrify.sqlalchemy blueprint which executes a SQL statement,
generally a query, and feeds the return values from that query into the transmogrifier pipeline.
Configuration
A transmogrify.sqlalchemy blueprint takes two or more parameters:
dsn
Connection information for the SQL database. The exact format is documented in the SQLAlchemy
documentation for create_engine() arguments.
query*
The SQL queries that will be executed. Any parameter starting with ‘query’ will be executed,
in sorted order.
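A minimal source section using the documented `dsn` and `query` options (the SQLite DSN, table names and section name are examples; multiple `query*` parameters run in sorted order):

```ini
[sqlsource]
blueprint = transmogrify.sqlalchemy
dsn = sqlite:///content.db
query1 = SELECT id, title, body FROM documents
query2 = SELECT id, title, body FROM news
```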