Feature Proposal: The Foswiki OO model needs to be more comprehensive and unified.

Motivation

  • As the Foswiki API is growing
  • As plugins are still stuck with the procedural paradigm
  • As Foswiki internals are somewhat inconsistent

I think a shift toward a unified OO model is really needed.

Description and Documentation

The idea of turning plugins into full-fledged objects was the starting point. Then, by studying the guts, I discovered that generally there is no real consistency in the OO implementation across different modules. And lately, in the AddTakeOutPutBackBlocksToFunc discussion, I read about a possible separation of the Foswiki::Func namespace into several Foswiki::API:: namespaces because it is becoming too big.

Altogether I see this as a good reason to:

  • unify the OO model across the codebase.
  • move from the procedural API towards an OO-based one. Make the current Foswiki::Func a wrapper for backward compatibility.
  • make the plugins subsystem an equal participant in this shiny new world. smile

Sure enough, this does not and will never involve the use of a monstrous OO toolkit like Moose or the like.

Unification of the model means to me that:

  1. There is a single ancestor of all Foswiki classes, Foswiki::Object. It handles most of the burden of low-level creation and initialization of an object (a minimal sketch follows right after this list).
  2. There is a system of basic and not so basic object properties. Direct references to keys on the object hash are to go away, replaced with methods.
  3. There is a system of hierarchical exceptions to make error handling more fine-grained and consequently better handled. Hopefully.
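
To make the first and third points more concrete, here is a minimal, toolkit-agnostic sketch; this is my illustration only, not actual Foswiki code, and all names apart from Foswiki::Object are assumptions:

package Foswiki::Object;

use strict;
use warnings;

# Single ancestor: owns the low-level creation/initialization burden.
sub new {
    my ( $class, %params ) = @_;
    my $this = bless {%params}, $class;
    $this->BUILD if $this->can('BUILD');    # optional per-class initializer
    return $this;
}

# Accessor helper instead of direct hash-key access from the outside.
sub _property {
    my ( $this, $name, @value ) = @_;
    $this->{$name} = $value[0] if @value;
    return $this->{$name};
}

package Foswiki::Exception;            # root of the hierarchical exception tree
our @ISA = ('Foswiki::Object');

package Foswiki::Exception::Fatal;     # example subclass; the name is illustrative
our @ISA = ('Foswiki::Exception');

1;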

The property encapsulation must ensure that a class developer's hands are untied, so they need not worry too much about how the class or its objects are used by the rest of the code. For example, if somewhere the session topic is requested as $session->topic, then who cares if the developer adds some additional manipulations before returning the topic object to the calling code?

Generally, this way we also move from API calls like Foswiki::Func::getUrlHost() (or, worse, Foswiki::API::Session::getUrlHost()) to something like $session->getUrlHost(). In my view this kind of call is cleaner and more readable.

A few words on plugins. Their current design, based on pure procedural calls, lacks data encapsulation. Imagine that a plugin requires some topic-related data to be stored between calls to handlers. Imagine then that the topic includes another one and the plugin has to handle it separately, but the data stored for the parent topic will affect the results.

Sure, there is always a solution. But there will always be a catch. For example, by storing local data on a per-web.topic basis a plugin developer will forget about inclusions of the very same topic. But why put this burden on the plugin developer's shoulders when the solution is the rule "one topic – one plugin instance"?
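
As a purely hypothetical sketch of the "one topic – one plugin instance" idea (the class, base class and handler signature here are illustrative assumptions, not an existing Foswiki API):

package Foswiki::Plugins::ExamplePlugin;

our @ISA = ('Foswiki::Plugin');    # assumed common plugin base class

sub commonTagsHandler {
    my ( $this, $text ) = @_;

    # One instance serves exactly one topic, so per-topic state can live
    # right on the object; no need to key a cache by "Web.Topic" and no
    # clash when the same topic is included into itself.
    $this->{macroCount}++;

    return $text;
}

1;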

Now about concerns. First of all, there will be worries about backward compatibility. I'm sure it is possible to keep it. The exact means are to be considered in each particular case, but I see no fundamental obstacles. Plugins have to be specially mentioned here because they're to be changed the most radically. In my view the only viable solution for them is the introduction of a static $API_VERSION variable which would refer to the minimal API version supported by the plugin. In the absence of this variable the plugin is considered old-style and will be handled correspondingly. In the future this would also let Foswiki filter out unsupported plugins.
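
A hypothetical illustration of that marker (the value and its exact semantics are still to be defined):

package Foswiki::Plugins::ExamplePlugin;

# Minimal plugin API version this plugin supports; when absent, the plugin
# is treated as old-style and handled through the compatibility path.
our $API_VERSION = 3.0;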

Another concern is memory consumption, which will definitely grow because of the "one topic, one plugin" paradigm. But I think the penalty isn't going to be that big: partly because a plugin storing data on a per-topic basis is consuming that memory already, and partly because I hope that any include would be properly destroyed the moment it is not needed anymore – together with all the plugins bound to it. Some optimizations are possible too, to avoid loading unnecessary plugins.

I'd like to specifically note George Clark's concern about the freedom of refactoring/redesigning the internals, raised in the AddTakeOutPutBackBlocksToFunc topic. In my view there is no difference between Foswiki::Func API functions and documented API class methods. From the moment a method is documented and declared public it becomes as immutable to changes as any Foswiki::Func function. Of course, any redesign may have so much influence over the code logic that the method's meaning itself becomes obsolete – but this may happen to any of the API functions too. Actually we may consider API functions as methods of obscure objects. The difference is obvious: we cannot choose what object these methods operate upon.

Examples

  • Now:
$urlHost = Foswiki::Func::getUrlHost();

sub getUrlHost {
    return $Foswiki::Plugins::SESSION->{urlHost};
}

  • Proposal:
$urlHost = $self->session->getUrlHost();

sub getUrlHost {
    return $_[0]->{urlHost};
}

In cases where the session property is heavily used within a scope and consequently pre-fetched into a variable, the first line becomes even more terse.
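
For illustration only (the second call is hypothetical and just shows repeated use of the pre-fetched handle):

my $session = $self->session;               # pre-fetch once within the scope
$urlHost    = $session->getUrlHost();
$scriptUrl  = $session->getScriptUrlPath(); # hypothetical sibling method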

Core/API changes to be considered.

In this section I'd like to start a discussion on particular changes in the core infrastructure. Perhaps this is a subject for a separate proposal but I would keep it here for a while.

Foswiki::Meta

As the Moose/Moo OO model does not tolerate direct access to the keys of the object hash, an attribute meta has been introduced on the Foswiki::Meta class which stores the metadata hash. Where it was $topicObject->{FILEATTACHMENT} before, it is now $topicObject->meta->{FILEATTACHMENT}. With this change it is not possible to maintain backward compatibility anymore. Consequently, it's a good time to reconsider the outlines of Foswiki::Meta.

For now Foswiki::Meta is a somewhat awkward mixture of topic text, meta-data and loosely related methods like populateNewWeb(). There are only two ways out of this situation:

  1. Total unification: text becomes another key on the meta-data hash and will be distinguished from other metas only by text-specific methods of the universal object. Foswiki::Meta becomes Foswiki::Entity.
  2. Total separation: Foswiki::Meta gets split into Foswiki::Entity, which is focused on general web/topic and text handling, and an actual Foswiki::Meta, which is all about meta-data and is used by Foswiki::Entity by casting the meta attribute into Foswiki::Meta.

ALERT! meta is not a good name for the attribute because a method with the same name is declared on the root class Moo::Object. Depending on the destiny of Foswiki::Meta, a new name shall be chosen for this attribute.

In both cases the class Foswiki::Entity (the actual name doesn't matter, but this particular one reflects the object's functionality best) must replace Foswiki::Meta. In the second case it would be nice to consider renaming Foswiki::Meta too, to avoid possible incompatibility conflicts with the old code. Foswiki::META is an obvious option.

The main advantage of the universal approach is that it opens the way for further restructuring of the OO model: separate Foswiki::Web and Foswiki::Topic classes deriving from Foswiki::Entity, clearly separating web management from topic management. This kind of separation would prove useful when it comes to error-proof design with cleaner code.

Separation of the text and the meta makes the use of separate stores for them easier. The API namespace of the entity (web/topic) object could be cleaned up by delegating only selected methods from the meta attribute.
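
A minimal Moo-style sketch of that delegation (names taken from the discussion above, not a settled API; the attribute is called _metaData here to sidestep the meta name clash noted above):

package Foswiki::Entity;

use Moo;

# Meta-data lives in its own object; only selected methods are exposed
# on the entity's API namespace via delegation.
has _metaData => (
    is      => 'rw',
    handles => [qw( get put putKeyed remove )],    # assumed Foswiki::Meta methods
);

# Topic text is handled by the entity itself, separately from the meta-data.
has text => ( is => 'rw' );

1;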

An additional level of abstraction is possible in both cases by giving every meta-data key its own class. But that is too distant a perspective.

Progress status.

To keep this topic from overgrowing into a monstrous article I'm keeping track of what major changes are done or planned in the branch support task topic. I do not consider these as proposals because they're more or less inevitable consequences of this whole subject. Besides, some TODO ideas are at such an embryonic stage that they are simply not worth mentioning here until thoroughly considered.

Please, fill this in! Versions of Moo and Mouse modules available on different platforms.

To make a choice between the two alternative OO toolkits we need to know if they're available on different platforms. Please follow the HowToInstallCpanModules recommendations, then check whether Moo and Mouse packages are available and what their versions are. If there is no package for a module, put - in the corresponding cell.

| OS/Distro with version | Moo version | Mouse version | Comments |
| Mac OSX 10.11 | 2.0.2 | 2.4.5 | With MacPorts 2.3.4 |
| FreeBSD 9 | 2.000002 | 2.4.5 | Moo version is likely to be equivalent to 2.0.2 on Mac. |
| FreeBSD 10 | 2.000002 | 2.4.5 | |
| Ubuntu server 15.10 | 2.000001 | 2.4.2 | |
| CentOS 7.1 | - | 1.11 | With epel-release installed |
| FreeBSD 8.4 | 2.000002 | 2.4.5 | |

Impact

%WHATDOESITAFFECT%

Implementation

-- Contributors: VadimBelman - 15 Dec 2015

Discussion

I like it. There are many parts of the code that will be affected by the outlined redesign. Yet I am sure it will be of great benefit, not least because the current procedural code base - especially the user code - is a real problem not only conceptually but also with regard to performance and extensibility.

-- MichaelDaum - 15 Dec 2015

Since I'm not a core developer and am only hacking Foswiki for my local needs, I'm feeling the "nolo contendere". Therefore everything below should be taken as remarks and not as formal concerns.

The main point still is: I absolutely and fully agree with the basics of the proposal - move to "more OO". smile

But after yesterday's addition to the concept I am starting to feel a bit uneasy. Foswiki is (once again) going to invent its "own, in-house" solution that will be compatible only with itself. And regardless of the quality of the code, any developer must
  1. learn some new and "nonstandard" class builder (that is more or less OK)
  2. and it again makes "outsourcing" a bit harder.

Yes, talking about the "standard" for the class builders is strange, because every developer (at least once in his life) must "invent" his own "best fit" class builder, But for the long-term code I would be more happy with some widely-tested and accepted CPAN module. Of course, not need to use CPAN for every trivial code only because exists some CPAN, but the class-generation is IMHO one of an example for what is good to have the "wide acceptance". Especially when it introduces the sugar, (even if it could be easily changed) - sticking on "wide acepptance" helps here too.

While I'm aware of the ALAPC (as little as possible CPAN) philosophy in Foswiki - please reconsider using some existing, widely used and accepted class generator.

Also, (sometime in the future) we will break ALAPC anyway. The PSGI-fication of Foswiki will (probably) move a GREAT amount of code out of the Foswiki core to different, well-separated layers, such as many different middlewares and the like. Some of those middlewares use many different CPAN modules, so we will need to address how to solve the ALAPC problems not only for the OO system, but in general.

One example for all: CPAN:Plack::Middleware::Session is one of the most needed Plack middlewares. It is best configured using the CPAN:CHI caching drivers for session storage. CPAN:CHI is a mature and very good caching system. But using the Session middleware with CHI means installing tens of CPAN modules (for example CPAN:Moo too) - check the dependencies for CHI.

Myself, for all my web-app development needs, I use CPAN:Poet / CPAN:Mason. Both are CPAN:Moose based. Moose is a bit slower at startup. While I agree it isn't the best for CGI-based applications (such as Foswiki), it works perfectly fast in a persistent environment - and it really greatly reduces bugs and speeds up development. Adjectives like "slow" and "bloated" are (IMHO) mostly in the FUD category. smile

For quick Perl hacking I use CPAN:Mo. It can be inlined as "one long line" into the source code - so, zero dependency - but it still comes with Moose-compatible syntactic sugar (it can even use Moose, if configured to do so).

So, my conclusion: it would be nice to use one "widely accepted" OO system, or at least the in-house developed one should have "Moose compatible" sugar,
  • which in the current ALAPC environment will not pull in half of CPAN
  • but on the other hand allows local hackers to easily swap in the full-blown CPAN:Moose

-- JozefMojzis - 17 Dec 2015

If someone wants to do some benchmarks, I have attached some quickly hacked code, which gives the following results:

Mix - create, access - 10_000_000 times
Benchmark: timing 10000000 iterations of bless, mo, moo, moose, mouse, otiny...
     bless: 11 wallclock secs (10.31 usr +  0.00 sys = 10.31 CPU) @ 969932.10/s (n=10000000)
        mo: 15 wallclock secs (14.74 usr +  0.01 sys = 14.75 CPU) @ 677966.10/s (n=10000000)
       moo: 15 wallclock secs (14.84 usr +  0.00 sys = 14.84 CPU) @ 673854.45/s (n=10000000)
     moose: 25 wallclock secs (24.52 usr +  0.00 sys = 24.52 CPU) @ 407830.34/s (n=10000000)
     mouse: 14 wallclock secs (13.94 usr +  0.00 sys = 13.94 CPU) @ 717360.11/s (n=10000000)
     otiny: 10 wallclock secs ( 9.73 usr +  0.00 sys =  9.73 CPU) @ 1027749.23/s (n=10000000)
           Rate moose   moo    mo mouse bless otiny
moose  407830/s    --  -39%  -40%  -43%  -58%  -60%
moo    673854/s   65%    --   -1%   -6%  -31%  -34%
mo     677966/s   66%    1%    --   -5%  -30%  -34%
mouse  717360/s   76%    6%    6%    --  -26%  -30%
bless  969932/s  138%   44%   43%   35%    --   -6%
otiny 1027749/s  152%   53%   52%   43%    6%    --

Accessors: 20_000_000 times
Benchmark: timing 20000000 iterations of bless, mo, moo, moose, mouse, otiny...
     bless:  6 wallclock secs ( 6.70 usr +  0.00 sys =  6.70 CPU) @ 2985074.63/s (n=20000000)
        mo:  7 wallclock secs ( 6.92 usr +  0.00 sys =  6.92 CPU) @ 2890173.41/s (n=20000000)
       moo:  3 wallclock secs ( 2.35 usr +  0.00 sys =  2.35 CPU) @ 8510638.30/s (n=20000000)
     moose:  7 wallclock secs ( 6.26 usr +  0.00 sys =  6.26 CPU) @ 3194888.18/s (n=20000000)
     mouse:  3 wallclock secs ( 2.61 usr +  0.00 sys =  2.61 CPU) @ 7662835.25/s (n=20000000)
     otiny:  6 wallclock secs ( 5.56 usr +  0.00 sys =  5.56 CPU) @ 3597122.30/s (n=20000000)
           Rate    mo bless moose otiny mouse   moo
mo    2890173/s    --   -3%  -10%  -20%  -62%  -66%
bless 2985075/s    3%    --   -7%  -17%  -61%  -65%
moose 3194888/s   11%    7%    --  -11%  -58%  -62%
otiny 3597122/s   24%   21%   13%    --  -53%  -58%
mouse 7662835/s  165%  157%  140%  113%    --  -10%
moo   8510638/s  194%  185%  166%  137%   11%    --

Of course, such "benchmarks" doesn't provides any meaingful results - not real-world scenario. Also, is incomparable the 4 line manual class class with the full-blown Moose using an powerful metaprotocol. But at least allows to know - if someone will use the Moose, for every one milion accessor will slow down the application with about 22ms againist the manual accessor/getter. smile (6700/20 - 6260/20) = 22ms. Source code is attached.

-- JozefMojzis - 17 Dec 2015

Above, the statement was made:
While I'm aware of the ALAPC (as little as possible CPAN) philosophy in Foswiki - ...

I would argue that this really isn't true. IMO, we make measured and well-reasoned use of CPAN modules, but with some (probably unwritten) guidelines.
  • Modules should be generally available from official repositories across common distributions. RedHat, Centos, Suse, Debian, Ubuntu, Strawberry, Active State. ...
  • The required version of each CPAN module should be likewise commonly available. Using features of the "very latest" module is sure to cause issues. So "lowest common denominator" versions.
  • If practical, choose modules that have pure perl versions available in addition to the XS code. XS is important for performance, but Pure Perl implementations are needed for bundled packages, like CpanContrib, which are needed for some hosting sites.
  • Modules should not be dependent upon non-Perl packages which might not be available on some platforms.
  • Modules should be "well supported" upstream.
  • And keep the complexity down. Pulling in a single module that brings in dozens of additional modules, makes it all that much harder for installations.

We went through this with the recent addition of Email::MIME. We polled the available versions across many distributions, and changes were made to ensure that we didn't "require" versions and features that were not widely available.

So adding CPAN modules is not discouraged.

Anecdotally, and without data, my fear is that Moose is "too heavy". I vaguely recall installing other applications that resulted in a very large number (many dozens?) of modules being pulled in by Moose, some of which I had to manually install using CPAN. But I can't back this up with any facts.

-- GeorgeClark - 17 Dec 2015

There are only two sensible options:

  1. leave things like they are and live with it
  2. improve oo model using Moo or Mouse

I don't think that inventing YAOOS (yet another oh-oh system) is an option.

-- MichaelDaum - 17 Dec 2015

OK, so far I give my vote to Moo but will investigate both Moo and Mouse. So far Moo seems more likely to be the choice because they claim to be the fastest to start. This is crucial for CGI, of course. It depends on Try::Tiny, which is the first candidate for exception handling.
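
For context, a minimal sketch of Try::Tiny-based handling of a hierarchical exception (the exception class, its methods and the store call are placeholders; nothing is decided yet):

use Try::Tiny;

sub readTopicSafely {
    my ( $store, $web, $topic ) = @_;

    return try {
        $store->readTopic( $web, $topic );    # placeholder store call
    }
    catch {
        my $e = $_;
        # Hierarchical exceptions: handle the branch we understand, re-throw the rest.
        if ( ref($e) && $e->isa('Foswiki::Exception::Store') ) {    # placeholder class name
            warn "store-level failure: $e";
            return undef;
        }
        die $e;
    };
}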

Anyway, the draft is to be totally reconsidered. I have removed it from this topic until a new one is ready. What will remain unchanged in any case is Foswiki::Object as the root for all other Foswiki classes.

-- VadimBelman - 17 Dec 2015

I favour Moo as well:
  1. pure perl
  2. packages for Debian, RedHat, Ubuntu and Gentoo; albeit only rough checking so far
  3. some XS options are available for speed - were the above benchmarks using these?
  4. Moose could optionally be installed for extra performance (needs testing)
    • In a persistent set-up the longer start-up may not be relevant, or at least a worthwhile trade off

I have another FP to write: clean up and add structure to Foswiki tools. It was already in my mind to use Moo there.

This will be my 1st time with Moo in real code and would allow me to refine the ideas without impacting core, yet still deliver something useful. Crucially I believe an OO framework is important to get right and the community needs to actually work with it -- i.e. test if developing with it is more productive and helps to create good code.

I'll write this FP up and let you ponder these together.

-- JulianLevens - 17 Dec 2015

Julian, I'd like to make a few corrections:

  1. Mouse is optionally XS; it depends on how it has been built. It doesn't have dependencies – definitely a plus.
  2. CentOS doesn't have a package for Moo. A big disappointment to me.
  4. Unfortunately, a persistent setup is only an option. Until CGI is declared officially dead we have to support it. Until then Moose is definitely out of the equation.

And I am looking forward to the tools cleanup! Especially if all this zoo gets clear documentation. smile

-- VadimBelman - 18 Dec 2015

I have done a bunch of tests and must admit that my mind has changed. Mouse is the preferred choice for sure. To prove it I have added a table above with some interesting results:

  • Moo has more dependencies and pollutes namespace more than Mouse.
  • Though Moo wins a simple 'use Moo' vs 'use Mouse' test, it falls behind once a class with a property gets declared. Mouse wins with ~25% less time on object creation. Adding additional code to actually create an object and reference the property on it doesn't change much.

The only case where Mouse actually loses to Moo is memory consumption, which is about twice as high as it is for Moo... except for an outdated FreeBSD 8.0 where the same module versions display a slight but clear win for Mouse. That's confusing. To clarify the case I have run the test on a CentOS distro under a VM, with the same result as on OSX – Mouse is twice as memory-greedy.

It is very likely that there is a test which would show Moo's supremacy over Mouse in speed. But I think that would only prove that there is no big difference between the two. At the same time Mouse looks more featureful, while Moo is missing from the CentOS repos.

Conclusion: Go for Mouse.

-- VadimBelman - 18 Dec 2015

Vadim, thanks for the detailed benchmarking! How hard do you think it would be to switch from Moo to Mouse or vice versa? I know both cover different feature sets, but my guess is that we won't be using the more exotic ones in Foswiki (type checking and whatever, not sure really). So given we stay within the overlapping feature set, would it be possible to switch gears once we find out we bet on the wrong horse?

-- MichaelDaum - 18 Dec 2015

Michael, you touch on a subject I forgot to mention yesterday. First of all, I do not think there will be such a thing as a 'wrong horse' unless real-life benchmarks prove a different and much worse performance. Otherwise switching the toolkit shouldn't be a big issue, especially considering that they have added some features over the last years. For example, on this page it was stated that Mouse doesn't have BUILDARGS support. In fact, I found it in the code. The CPAN:Mouse::Spec page also declares that this feature is now supported.

-- VadimBelman - 18 Dec 2015

An additional point of view - Mr. Popularity - i.e. the modules which are already developed using the given module, i.e. its direct reverse dependencies, or in other words the modules by which the given module will get loaded:

  • https://metacpan.org/requires/distribution/Mouse?sort=[[2,1]]&size=500
    Mouse - count: 240
  • https://metacpan.org/requires/distribution/Moo?sort=[[2,1]]&size=500
    Moo - count: 1166
  • https://metacpan.org/requires/distribution/Moose?sort=[[2,1]]&size=500
    Moose - count: 2610

(BTW, how does one correctly enter a URL which contains [[ - like the ones above?)

It needs to be said that the recursive reverse deps, i.e. the lists of modules which use Moo|Moose|Mouse indirectly (via another module), are of course much bigger. The recursive reverse deps can be gathered with CPAN:App::CPAN::Dependents.

Also, it is worth checking which modules use them - for example

Also it is worth considering: If Moo detects Moose being loaded, it will automatically register metaclasses for your Moo and Moo::Role packages, so you should be able to use them in Moose code without modification. So for Moose users, using Moo means a good level of compatibility. This feature isn't directly available for Mouse; afaik in Mouse you must manually change Mouse to Moose for the switch - or you need to use CPAN:Any::Moose.

I can't confirm the 25% faster object creation speed in favor of Mouse. My tests (on the same MacBook as Vadim's - 15" Retina/2015) show:
Create instance 10000000 times
Benchmark: timing 10000000 iterations of moo, mouse...
       moo: 13 wallclock secs (13.52 usr +  0.00 sys = 13.52 CPU) @ 739644.97/s (n=10000000)
     mouse: 12 wallclock secs (12.25 usr +  0.00 sys = 12.25 CPU) @ 816326.53/s (n=10000000)
          Rate   moo mouse
moo   739645/s    --   -9%
mouse 816327/s   10%    --

the code
#!/usr/bin/env perl
use 5.014;
use warnings;
use utf8;

package My::Moo {
    use Moo;
    has city => (is => 'rw');
};

package My::Mouse {
    use Mouse;
    has city => (is => 'rw');
};

my($moo, $mouse);
use Benchmark qw(:all);

my $c = 10_000_000;
say "Create instance $c times";
my $mix = timethese( $c, {
    'moo'   => sub { $moo   = My::Moo->new(city => 'Preßburg');  },
    'mouse' => sub { $mouse = My::Mouse->new(city => 'Preßburg');},
});
cmpthese($mix);

Testing the compile+load time is important only for a pure /cgi-bin based Foswiki. For any other engine - like FCGI, ModPerl etc. - the code is compiled once and what matters is the execution speed, e.g. the above instance creation speed and of course the getters/accessors.

So, I am voting for Moose - it is the best - but Moose has already been declined, so my second vote is for Moo smile :), because:
  • of the automagical class registering for Moose, if it detects Moose
  • Moo is used by some interesting CPAN modules which will probably be used in Foswiki, so Moo (with high probability) will get loaded anyway (CPAN:CHI).

Also, we have been talking about the core dependencies; the extensions can load any CPAN deps (like Image::Magick smile) - so, no limits on Moo or Moose or anything...

-- JozefMojzis - 18 Dec 2015

Joze, what I strictly disagree with is the ignoring of CGI. Whether we want it or not, it's still widely used. We cannot simply tell users 'you're doing it the wrong way' – it'll just bounce back on the project. This is why I tested the overall load time of a script.

Take into account that CentOS 7 (dunno about other versions) has Moo neither in its standard package set nor in epel-release. Asking a user to install something from CPAN for the core to run? "Nah, thanks guys! I'll go choose another solution!" – that is what you will hear.

Actually, CentOS has really amazed me. Considering the difference in Moo/Mouse popularity, why on earth have they chosen Mouse? smile

To be honest, I see no big difference between Moo and Mouse because they're quite similar in their supported feature sets. At least, both do what I would expect from an OO toolkit. The only 'but' which changes everything is CGI.

PS. At least you've done what I was reluctant to do yesterday – tested pure new() performance.

-- VadimBelman - 19 Dec 2015

Even though Joze has removed his comment, he pointed out a serious error of mine. I had overlooked that the OSX version of Mouse uses XS by default. Correspondingly, the FreeBSD version does as well.

I have updated the test results for pure-Perl Mouse code and it doesn't look that shiny anymore. Memory consumption has grown significantly; script timings are now more like those for Moo on OSX and visibly worse on the old FreeBSD server.

So, I finally admit that by the summary of pros and cons Moo is the clear winner after all. The only issue is CentOS with its incomplete repository. Any suggestions from Linux gurus?

-- VadimBelman - 19 Dec 2015

As I understand it, Moo by design is pure perl including all dependencies. So, we could even ship it as a contrib, not ideal but doable -- there might be issues if we need particular versions.

Looking around the web, as CentOS is based on RedHat it appears that a RedHat rpm should work on CentOS. This will need testing of course, but as it's pure Perl it really should be a no-brainer.

I'm really glad you raised this FP, Vadim. I had been thinking about this for some time but didn't raise an FP. I'd been sucked into the idea that the FW community would be hard to persuade (ALAPC etc).

I'm glad I'm wrong -- thanks for the excitement smile

-- JulianLevens - 19 Dec 2015

I'm more concerned that there is a pure-perl version available for the relatively few hosted sites that are unable to install modules and depend on the CpanContrib . I think that the vast majority of sites will use system packages, or install using cpanm or some other flavor, which will generally compile the XS versions when available. So performance concerns about pure-perl are IMHO quite low. My assumption is that the primary reason the XS exists in the first place is due to pure-perl performance issues. So as far as dependencies go, it just needs to be there.

-- GeorgeClark - 19 Dec 2015

Julian, if you get the rpm from RedHat then how would you guarantee that its dependencies would be satisfied? I would rather tell people to install it manually when possible. Otherwise we put it into CpanContrib.

George, unfortunately there is no XS option for Moo.

-- VadimBelman - 20 Dec 2015

My understanding is that dependencies would be satisfied within the RedHat context. There is a risk of incompatibilities when RedHat and CentOS packages are not quite in sync. I would have thought that's a low risk in this pure-Perl case. If we ship it in CpanContrib we would still have to worry about keeping up to date with the correct version of Moo and all its dependencies.

While neither of these is ideal, I struggle to reject Moo on these grounds. Whereas I feel I must reject Mouse (not always XS-free, and it will be incompatible with many CPAN modules).

-- JulianLevens - 20 Dec 2015

For completeness about Moo it needs to be said: If a new enough version of CPAN:Class::XSAccessor is available, it will be used to generate simple accessors, readers, and writers for better performance. Simple accessors are those without lazy defaults, type checks/coercions, or triggers. ...

Also, in Moo, for attribute type checking it is recommended to use CPAN:Type::Tiny. It is a pure-Perl module, but if CPAN:Type::Tiny::XS is installed, CPAN:Type::Tiny will use it.

E.g. using CPAN:Class::XSAccessor and CPAN:Type::Tiny::XS could boost Moo's speed (which is good enough even without any XS).
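
A small sketch of what that looks like in practice (the class and attributes are illustrative; Types::Standard ships with Type::Tiny):

package Foswiki::Example;

use Moo;
use Types::Standard qw( Int );

# A plain accessor: simple enough for Class::XSAccessor to generate, when installed.
has name => ( is => 'rw' );

# A type-checked accessor: the Int constraint comes from Type::Tiny and is
# accelerated transparently when Type::Tiny::XS is installed.
has count => ( is => 'rw', isa => Int, default => 0 );

1;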

-- JozefMojzis - 20 Dec 2015

I think I'm being misunderstood. XS is fine; we have XS already. If it's available and gives a performance boost, great. All I'm saying is that when we update CpanContrib, we need to include non-XS versions of any modules to avoid platform dependencies. So XS is good, but a fallback to non-XS is strongly desired. If it's not available we risk leaving users who depend on the CpanContrib non-functional. (We already do, unfortunately ... Some of the HTML modules don't have pure-Perl versions, so Wysiwyg will probably be dysfunctional. The version module is XS-only too.)

-- GeorgeClark - 20 Dec 2015

I have removed the benchmark table as it is now useless.

Committed a few changes to the Item13897 branch to demonstrate possible directions. I don't know whether it's reasonable to copy the self-documentation from the code over here.

-- VadimBelman - 29 Dec 2015

Great! As you probably know, I introduced most of the OO code in Foswiki core so far, so I can hardly argue against this initiative. The only word of warning I'd give is that maintaining backwards compatibility is not as easy as you might think. Even today, and even with all the attempts to document, there are undocumented behaviours of the Func API that will bite you. But don't let that stop you, go for it - you can't make an omelette without breaking eggs! Just make sure you test, test, and test again.

-- Main.CrawfordCurrie - 29 Dec 2015 - 08:35

Crawford, thanks for the support! I would very much appreciate reviews, advice and consulting too.

-- VadimBelman - 29 Dec 2015

Just checked the new code on GitHub. smile

I'm probably unable to see the whole picture yet, and I somewhat missed one sentence above: There is a single ancestor of all Foswiki classes, Foswiki::Object. It handles most of the burden of low-level creation and initialization of an object.

so, I have some questions to learn new things.
  • The BUILDARGS provided in Foswiki::Object should probably call the builtin (i.e. Moo-provided) BUILDARGS if @_newParameters isn't defined. Or isn't that necessary? (The builtin BUILDARGS is defined here: https://metacpan.org/source/HAARG/Moo-2.000002/lib/Moo/Object.pm - and does some more error checking.) Or is it possible to re-implement it in your Foswiki::Object::BUILDARGS? Or is that not needed?
  • But mainly I want to learn what the main benefit is of having one common single ancestor - i.e. Foswiki::Object as a common parent for all Foswiki classes - especially now, when the whole "more-oo" action is using CPAN:Moo, so the low-level "things" are handled by Moo itself.
  • The @_newParameters "meta-protocol" could help with adapting the current non-hash/hashref based constructors (by defining the parameter order) - but it would be nice to know how we will implement type checking in the classes (probably by using CPAN:Type::Tiny) and how we will use Moo's type coercions. Wouldn't we need to implement our own BUILDARGS method for our classes anyway, and thus what is the benefit of having the inherited one?

Please understand, these questions aren't "concerns" - I am just asking to learn new things and trying to understand the "big picture". smile

-- JozefMojzis - 30 Dec 2015

BUILDARGS is not supposed to be inherited, only overridden. Here is an excerpt from the Moo doc:

You can override this method in your class to handle other types of options passed to the constructor.

As Moo doesn't support complete type checking out of the box, I left it somewhat aside for now. Though there are two ways to implement it where considered useful: the isa property of a has declaration, if it is necessary to typecheck on a permanent basis; or as part of BUILD initialization – preferably for ro properties which are to be set only while an object is being initialized.

In either case, the @_newParameters is a temporary solution for the transitional period of converting all new() calls from positional to named parameters.
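
For readers not following the branch, here is a rough sketch of that transitional mapping (the class, its attributes and the parameter order are made up for illustration; the real code on the branch may differ):

package Foswiki::Example::Attachment;

use Moo;

# Declared legacy positional order of new() arguments.
our @_newParameters = qw( web topic name );

has web   => ( is => 'ro' );
has topic => ( is => 'ro' );
has name  => ( is => 'rw' );

# Map old positional arguments onto the declared names; once every call
# site passes named parameters this override can simply be dropped.
sub BUILDARGS {
    my ( $class, @args ) = @_;
    my %params;
    @params{ @_newParameters[ 0 .. $#args ] } = @args;
    return \%params;
}

1;

# Old call style keeps working during the transition:
# my $att = Foswiki::Example::Attachment->new( 'Sandbox', 'WebHome', 'logo.png' );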

What you're totally right about is that there is no error check upon converting parameters into a hash ref. But this is partially due to incomplete/missing exception handling, and partially because I simply forgot about it. As soon as I get more than a few changed lines to commit you'll notice a new comment on this issue. wink

Regarding the existence of Foswiki::Object: by design it's recommended to utilize a single ancestor whenever a bunch of classes are bound together by a common property – and what could be more common than being an integral part of an application? The benefit of a single ancestor to all Foswiki classes is to implement routine Foswiki-specific stuff in a single location. Moo does its own magic – we need our own.

BTW, Foswiki is overloaded with BEGIN blocks, which I consider a bad habit. I'm sure it's a rudiment of the pre-OO era and most of them could be harmlessly moved into object initialization.

Besides initialization we shall take care of destruction too, and this is another place where centralized code might be useful.

Well, something like this... But I need to say that after reading through your PSGI topic it's very doubtful whether this OO work is valuable. Its principles are incompatible with what has to be done for PSGI adaptation. While this job is mostly about keeping a solid application and letting it drift smoothly to another design model, PSGI is about breaking the project into cooperative but (to some extent) independent components. If your proposal is accepted in its radical-rewrite form (which I back completely) then it is totally pointless to continue this branch.

-- VadimBelman - 31 Dec 2015

Edited comment:

The PSGI topic is only a BrainStorming topic - NOT a feature proposal - and also it isn't directly related to your "more-oo" initiative. IMHO, this proposal is one of the most important moves in Foswiki history.

Also, your mention of the usage of BEGIN blocks (especially how Foswiki uses them - for many things which should not be done in the "compilation phase") is one of the best comments in the last 10 years. smile Please, DON'T STOP.

-- JozefMojzis - 31 Dec 2015

I've had a chance to read all the way through this topic now, and I'm mighty confused as to what the proposal is. The discussion seems to throw the original proposal at the top of the page into question, and introduces a whole new subject - PSGI - not covered by the proposal. Before commenting in detail I'd like to see what the actual proposal is, please, so some refactoring is in order.

Until then, I have a few comments:
  1. Plugins are fully-fledged objects, within the context of a script instance (i.e. a plugin is instantiated once per CGI request). As I understand it, your proposal is to change that to instantiating a plugin per included topic? I don't see how that helps; inclusion is a recursive process, and an include has to be able to 'see' the context in which it is included. At the moment, that context is handled by Foswiki::pushTopicContext. I don't see what adding a plugin instance per included topic buys you, other than another disjoint mechanism for handling the context. Wouldn't it be better to re-design/re-engineer the existing topicContext object instead?
  2. The current procedural Foswiki::Func API is retained purely and simply because the job of moving to an OO approach puts a huge burden on extension authors. I tried to do this once before, about ten years ago, and gave up because of lack of support/understanding from them. It is critical to get buy-in from these key stakeholders before making any changes (that's what I failed to do).
  3. While I like the idea of moving to Moo/Mouse the discussion above is inconsistent about which is the preferred approach.
  4. Take out / replace blocks functionality is related to the rendering pipeline and is pretty much irrelevant to the object model. I did consider building an internal model of the parsed topic so that blocks were, in effect, already parsed out (c.f. the tables parser) but abandoned that approach because of the structural mutation it would have to undergo as a result of topic syntax changes in the rendering pipeline.
  5. PSGI is related to, but not directly relevant to, this proposal. IMHO discussion of PSGI should be moved to another proposal.

-- Main.CrawfordCurrie - 06 Jan 2016 - 08:45

I agree we need more clarification.

It also seems to me that there is no need for the 1st implementation to refactor the whole project. The 1st implementation needs to be a coherent whole and clean.

That is, I see this as establishing a statement of code direction for the project. A full refactor is a lot to chew in one go. I'd rather see extra energy spent on the 1st implementation, as that will provide a good model for the rest of the project to adopt.

BTW: DeprecateErrorPm is slightly related to this.

-- JulianLevens - 06 Jan 2016

OO-ification is not only a question of Moo-ifying some Perl code. If this undertaking only transfers the current code to Moo*, then the actual occasion for improvements has probably been missed.

Rather than musing about coding styles and paradigms we should look into Foswiki's architecture, i.e. which parts are not OO at all such as the user code or the Foswiki::UI layer being too fat or topic objects not being properly represented (tables, lists, ...) etc.

Improving these weaknesses in the internal APIs is where I see the most benefit, when properly designing the software's architecture while using OO design patterns to isolate layers from each other. While certain parts of today's code base are already pretty much OO-ish though not using Moo*, some are definitely not, for legacy reasons. Even lifting those dirty corners to the same level of code quality and design quality would be enough of a goal in the scope of this proposal ... even without introducing Moo* at all as a CPAN dependency.

-- MichaelDaum - 06 Jan 2016

I just tried to run "yum install perl-Moo" on my pretty up-to-date CentOS machine with the right extra repository for Perl modules enabled.

No result. Moo is not a very standard module (or it has a completely different name as an RPM). Knowing that most corporate installations use one of the big enterprise distros, and RedHat (CentOS) is a significant number of them - adding a Perl CPAN library that I cannot add using the standard RedHat package management system and standard RedHat repositories is a total showstopper in my world. We have to remember that the average admin is not a programmer and may have only little knowledge of the Perl world. I have myself run into trouble I could not resolve when I mixed rpm-installed Perl with CPAN libs. That is a no-no if you want a clean, always up-to-date server running in production where yum updates the system automatically. You will soon end up in conflicts you have no easy way to resolve if you pollute Perl with CPAN-installed stuff or install RPMs from unofficial repositories. Then the alternative is a parallel Perl installation, but that does not get updated at all by the package management system and is also a no-no from a security perspective. That is only for bleeding-edge programmers and geeks.

If the CPAN Moo library is too exotic to be in the RedHat Extra Packages for Enterprise Linux (EPEL) then we are cutting off nearly half our "customer" base with such a move. I have found a perl-Moo RPM in some exotic repositories but not in the official repositories. I am raising a concern.

You all know that I am concerned about adding more and more CPAN libs. But I can live with it if it brings benefits to the end user AND the libraries are in the standard RedHat and Debian repos. In this case I cannot find it in the RedHat repo. And on top of it - the end user could not care less that you guys refactor the code yet another time.

-- KennethLavrsen - 06 Jan 2016

Kenneth, we will add it to CpanContrib as usual. Also, could you please add your findings to the above distro table? Thanks.

-- MichaelDaum - 06 Jan 2016

Crawford, unfortunately plugins aren't fully-fledged objects until their methods take $self and $self->isa('Foswiki::Plugin') is true. Instantiating plugin objects per inclusion simplifies a developer's work a lot because he wouldn't have to think about the contexts unless he really needs to. This is how we can reduce the number of bugs in plugins.

Choosing between the two storages for per-inclusion data, I would favor the plugin object.

For cases where a plugin needs to know more about the inclusion context there has to be a way to get information about the including topic. There are numerous ways to provide this info, starting with the clearest: a $meta->parent approach. The only big disadvantage of this model is the likely growth in memory consumption.
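
Purely as an illustration of that idea (every name below is hypothetical; no such API exists yet):

sub beforeCommonTagsHandler {
    my ( $this, $text ) = @_;

    # $this->topicObject would be the topic this plugin instance serves;
    # ->parent would point at the including topic's object, or be undef at top level.
    my $includer = $this->topicObject->parent;
    if ($includer) {
        # adjust behaviour when rendered as part of another topic
    }
    return $text;
}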

The procedural API would remain in place anyway. But the extension developers are to be encouraged to switch over for two reasons:

  • the new OO API has to be better and cleaner.
  • the old procedural API is declared deprecated.

Making the first reason true is where I will definitely need a lot of help from the rest of the community, because my primary task now is to create a basis which will make further improvements easier - I have some support from Julian on this approach. It means, for instance, that it is necessary to make some decisions on the internal architecture.

In the battle of Moo vs. Mouse the latter is the loser. Its user base is small, it's not much faster than Moo – if faster at all. And it's not easily integrated with Moose, which is very likely to be a part of the PSGI ecosystem.

You're right, PSGI is being discussed in QuoVadisPsgiFoswiki, thanks to Jozef's great work on this subject. But it is somewhat related to this topic too, as both proposals require a redesign of the Foswiki core. My hope is that a better OO model would make PSGI adaptation easier.

-- VadimBelman - 06 Jan 2016

http://kablamo.org/slides-intro-to-moo

-- MichaelDaum - 19 Jan 2016

I tried to write something sensible on Foswiki::Meta, but today is not the day when I'm a good friend of English. Hopefully the idea is understandable. The details are to be considered.

-- VadimBelman - 27 Jan 2016

Michael asked me to clarify my position.

I do not care about the change of OO model for the project. I do not even understand OO. I am not a programmer. I see things from an end user and admin point of view. And if you all spend the next 2 years rewriting the code to do exactly the same as today - my interest in it is none. I get nothing out of it for myself or my users. I maintain a number of plugins. They are very useful and powerful plugins. Some have features we should really consider making part of the default Foswiki. It is the features for the application builders and end users I care about.

If I now have to learn more OO geek stuff to program plugins, I lose on this proposal. I do not win anything. I want things to be simple and easy. And too much OO abstraction does not make it easy for us non-programmers.

I care about backwards compatibility for my plugins. I do not want to spend 6 months rewriting all my plugins. I do not give a damn about what the syntax for plugins is or whether they are OO this or OO that. We are many plugin authors who have invested a lot of effort in our plugins. I do not want that whole good, stable framework to break down.

What I fear, for the very small project team we are, is that a major rewrite will create a totally unstable core for 1-2 years and that it will require 1000s and 1000s of test hours to get it all stable again.

I am still concerned about Moo. There is no perl-Moo RPM in any of the official RPM repos. Why not? There are a 1000 CPAN libs in EPEL. Why is Moo not there? Why has the RedHat community decided not to include it? Is it stable? Is it maintained against security issues? Will we be able to maintain it in our CpanContrib? Do we commit to keeping it secure and up to date?

I am not going to even attempt to block you guys from doing what you want to do. But I want you to THINK before you just go ahead and rewrite everything.

The main target audience for Foswiki is businesses. The Enterprise market. Enterprise means that stability, security, and low maintenance cost are essential. Foswiki, and before that TWiki, has become stable over more than 10 years of constant improvements. Take good care of this legacy. Improve things in manageable increments, like Crawford and Sven have done in the past.

Enterprise means RedHat (and CentOS), Ubuntu LTS, SUSE Linux, and maybe Oracle Linux. Enterprise admins have a very basic admin education, cannot program Perl and do not know what CPAN is. They are the people that install Foswiki. They are the people that maintain Foswiki.

They should expect to run Foswiki on a standard setup with all Perl libs kept up to date by the package management system and the daemons that auto-install security updates.

If we geek off too much and make it more and more difficult to install and maintain Foswiki (I am not convinced it was smart to split out CpanContrib, to be honest) then we lose more and more potential users. I just installed Foswiki 2.1 this week on a blank hard drive. What a PAIN! I have installed all TWikis and Foswikis hundreds of times since TWiki 3.0 (Cairo). And I have never been through such a painful installation. And now you want to add Perl libraries I cannot even install with yum.

I do not see what is in it for me, my fellow admins around the corporations, and all our end users.

  • Is Foswiki going to be faster?
  • Easier to install?
  • Easier to maintain?
  • Easier to write plugins for?
  • Or do we now need to learn more object-oriented Perl to write simple plugins?
  • Will the Wysiwyg editor stop creating garbage and screwing up applications?
  • Will it be easier to write applications for the end users?
  • Will you pick up some of the smart new features TWiki added in the past years?
  • Will someone fix the plugins that rely on Java? Like JHotDrawPlugin.
  • Or will we just rewrite the whole thing because we can?

I can only say: the reason I have stopped contributing much is that in the past 2 years all this project has done is include some plugins we already had and refactor configure into something that is different but not better. There has not been much in it for me and my users.

Sorry. I am a bit frustrated about seeing all this good effort going into yet another code refactoring.

-- KennethLavrsen - 10 Feb 2016

Great points, Kenneth, thanks for that! Even though most of your statement is not new to me, having it all put together gives a bit of a different perspective.

I will try to answer some of the items in your bottom line:

  • Is it going to be faster? Most likely not in its first approach. But some features of Moo/Moose might make it more responsive.
  • Will you need to know OO Perl to write new plugins? Well, the old plugin model is going to remain intact as much as possible. So it would be possible to learn no more OO than anybody dealing with plugin code knows today. Minor differences like $session->{user} being replaced with $session->user must not be a big deal. The new plugin model? It's not there yet, and it's not even clear whether it will exist or whether it's going to be a plugin model at all. This is a tabula rasa. But assuming that we eventually develop the new model, I can reassure you that the basic differences between today's code and future code would mainly be a different package header and an additional $this parameter for plugin subs, which would become methods. That's a matter of 15-20 minutes of reading WhatsNew.
  • The Wysiwyg editor? Not my area; I'm even a bit scared of it. But will it be more reliable? Yes, if it embraces the best Moo/Moose coding practices. And here comes the point I would like to use for my bottom line.

Moo/Moose is about increasing code quality. This is where the end user will see a real improvement. Fewer memory leaks, a smaller number of bugs caused by typos, extra data validation leading to higher security – isn't that what the enterprise would welcome?

You're looking for new features? Do you know that some of them come at the cost of additional code complication? As code grows, its maintenance becomes more and more difficult and takes more and more time. Not to mention the growing number of bugs related to what tends to become 'spaghetti code'. This is why a redesign of the core structure is something to be done from time to time.

Yes, changes like this come at a cost. Mostly at the cost of code compatibility. Yes, people don't like it. They like stability. But listen, I was born in the USSR. They kept stability for 70 years. People were kinda happy with it. Now look how the whole story eventually ended up. smile

-- VadimBelman - 10 Feb 2016

I cannot resist asking the community for permission to incorporate CPAN:MooX::LvalueAttribute. Even though most of my job has been done using only standard CPAN:Moo functionality, it has some convenience drawbacks. For example:

$this->{someAttribute}++;

has to be translated into:

$this->someAttribute( $this->someAttribute + 1 );

With MooX::LvalueAttribute we can still use the comfortable:

$this->someAttribute++;

Sure enough, it would simplify attribute setting too:
$this->someAttribute = $value;
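
For reference, the declaration enabling this would look roughly like the following (a sketch based on the MooX::LvalueAttribute documentation; the attribute name matches the snippets above):

package Foswiki::Example;

use Moo;
use MooX::LvalueAttribute;

# 'lvalue => 1' makes the generated accessor usable on the left-hand side.
has someAttribute => (
    is     => 'rw',
    lvalue => 1,
);

1;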

Yet it may even have some positive performance impact. The following code:

use feature 'say';            # 'say' is used below
use Time::HiRes qw(time);     # fractional seconds, matching the timings shown below

# $incTree->value is a 'rw' attribute declared with lvalue => 1 (MooX::LvalueAttribute)
$incTree->value(0);

my $st = time;
foreach (1..100000) {
    $incTree->value($incTree->value + 1);
}
my $et = time;
say "Method call: ", $et - $st, "sec";

$incTree->value = 0;
$st = time;
foreach (1..100000) {
    $incTree->value++;
}
$et = time;
say "Inc call: ", $et - $st, "sec";

has produced the following results:

Method call: 0.269783973693848sec
Inc call: 0.21439003944397sec

In other words – ~20% gain. Hm, that's even better than I expected!

-- VadimBelman - 27 Feb 2016

Never mind. It depends on Variable::Magic, which is XS. And there is an 800-1000% extra loss on fetching the attribute value! I leave it here just to remember why it is bad. wink

-- VadimBelman - 27 Feb 2016

smile Ad benchmarking: talking about some "performance gain" for something which can be executed a million times per second is IMHO pointless.

Especially when Foswiki uses many things which aren't efficient at all. For example, by exchanging decode_utf8 and encode_utf8 from CPAN:Encode with the alternatives from CPAN:Unicode::UTF8 you could gain a 600% speedup in every Unicode file read. And this gain HAS a really measurable impact on every search. And we are still using Encode. smile
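
A minimal sketch of that swap (Unicode::UTF8 exports functions with the same names as Encode's; error/fallback behaviour differs slightly, so it is not a completely blind drop-in):

use Unicode::UTF8 qw(decode_utf8 encode_utf8);   # XS, much faster than Encode for this

my $raw_octets = "Pre\xC3\x9Fburg";              # UTF-8 bytes, e.g. read from a topic file
my $characters = decode_utf8($raw_octets);       # Perl character string
my $bytes      = encode_utf8($characters);       # back to octets before writing to disk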

IIRC, one simple bin/view executes approx. 900k lines of Perl code in total. So, how many times will you use the LvalueAttribute? 10_000 times? I.e., is a real 2 ms gain/loss meaningful compared with the typical 600 ms Foswiki response time? I doubt it. 600 ms vs 602 ms smile So, don't even bother benchmarking something which has an execution rate of more than 100k per second. (BTW, this logic should be applied to the Moo/Moose/Mouse constructor benchmarking too. Simply put, in the real world there isn't any meaningful difference in the web application's response time regardless of using Moo or Moose. All the above benchmarking numbers are just numbers - without real impact on Foswiki's response time.)

While I like the LvalueAttribute, for some "old school" Perl developers the code could be harder to read, and it will not be evidently clear at first read that the lvalue is a method call anyway.

When talking about "other modules", Vadim, if you have some time, please read and think a bit about the Sandbox.ImportInto. Myself like the DRY principe. Maybe, you too. smile

-- JozefMojzis - 28 Feb 2016

Jozef, shame on me, but I forgot to reply to your last comment, even though I read your draft the same or the next day. I like it as an idea but we'll need to work on some details.

-- VadimBelman - 08 Mar 2016
 