When I started on my first big ABAP-based custom development project a couple of years ago, we
built our own object-oriented ABAP development framework. We used a common three-layer
architecture and built, to some extent, a generic database layer. This was quite a lot of work, but
it made the database layer easy for the developers to use. Only a few developers knew exactly how
it worked; all the others simply used the provided interfaces to access it. We had a central transaction
manager which was responsible for committing the data, as well as common database lock handling. We
also used common design patterns for the rest of the framework. That is where I realized how important
a framework is in big projects. New developers could focus on implementing new business
functionality, and because we had our common patterns it was quite easy for them to switch from one
business object to another. But I have to admit that implementing and maintaining the framework
was quite hard work.
With BOPF there is no need to design and implement such a framework on your own. You can simply
use BOPF, and it brings even more time-saving features for developers. In BOPF you start by
modeling your business objects in a declarative manner: you decide which entities a business object
has, which actions can be executed on each entity, and how an entity is associated with other entities of
the same business object or with entities of other business objects. The BOPF framework itself is
implemented in a very generic way: it interprets the defined meta model and provides methods to
interact with the business objects.
After you have modeled your business objects, including data structures, keys, associations, and actions,
BOPF supports you in generating the required DDIC objects as well as constants interfaces. Because you
also define the lock behavior of each entity, you can immediately test your business objects using the
BOPF Test UI. The declarative way of modeling a business object has another big advantage: BOPF
generates a visualization of the defined model in which you can easily see how the entities relate to
each other. This visualization can also serve as a basis for discussions between the functional
architects and developers, making it easier for the functional architects to understand how the system
works.
Example of BOPF model visualization
Furthermore, with FBI (Floorplan Manager BOPF Integration) there is an easy and fast way to implement UIs
for BOPF objects. FBI is an integration layer between BOPF and the SAP Floorplan Manager (FPM).
You define your UI elements based on your business model, and the resulting UI is able to read and
change data from the very beginning.
Of course, it takes some time to learn BOPF and to fully understand it. But this is a
one-time effort; afterwards you will deliver your custom development projects much faster than today.
And you do not need to worry about transaction handling and lock handling, because the framework
takes care of all this. As it has been used within SAP development for years, the framework is mature
and stable. If you run into problems with the framework, you can rely on your SAP support.
If BOPF becomes common knowledge in the ABAP world, it will be easy for developers to get started on a
new project. Developers can focus on the business functions and do not need to take a deep dive
into the underlying development framework and architecture.
To work with BOPF, your team needs object-oriented programming skills, which is still a problem in the
SAP world. But if you look at all the new technologies pushed by SAP, there is no future as
an ABAP developer without these skills.
If you are going to TechEd EMEA in Amsterdam, join Oliver Jägles' session CD219 on November
6th:
http://sessioncatalog.sapevents.com/go/ab.sessioncatalog/index.cfm?l=57&sf=1504
He will share his experience with BOPF in a custom development project.
Wouldn't you like to streamline and simplify the development process for your business applications?
Then you should get to know more about BOPF, our infrastructure for developing business objects
that is available for the SAP Business Suite. With the Business Object Processing Framework, you
will save time during the development cycle because you don't have to implement all the technical
details yourself - details such as authorization control, low-level transaction handling, buffer
management, provisioning of a consumer API, or business logic orchestration. Using the model-driven
approach in BOPF, you can instead focus your attention more on the actual business requirements
themselves.
What does BOPF stand for?
The Business Object Processing Framework is an ABAP OO-based framework that provides a set of
generic services and functionalities to speed up, standardize, and modularize your development.
BOPF manages the entire life cycle of your business objects and covers all aspects of your business
application development. Instead of expending effort for developing an application infrastructure, the
developer can focus on the individual business logic. Using BOPF, you get the whole application
infrastructure and integration of various components for free. This allows you to rapidly build
applications on a stable and customer-proven infrastructure.
More: Floorplan Manager for Web Dynpro ABAP and Web Dynpro ABAP on SCN

Other infrastructure components that integrate with BOPF:

Post Processing Framework (PPF)
With BOPF BOs, you can integrate business processes using the Post Processing Workflow.
More: Post Processing Framework (PPF) on SCN

Archive Development Kit (ADK)
With ADK you archive not only table records but also business object instances. Using BOPF you can
select which BO instances have to be archived and then trigger the archiving process for them.
More: Archive Development Kit on the SAP help portal
According to SAP's BOPF Enhancement Workbench documentation, business objects within the
BOPF are "a representation of a type of uniquely identifiable business entity described by a structural
model and an internal process model." In other words, BOPF business objects encapsulate both a
data model and the behavior that operates on it.
In this regard, BOs in the BOPF are not unlike objects developed in other component architectures
(e.g. EJBs in Java, Microsoft COM+, etc.).
From a modeling perspective, BOs are made up of several different types of entities:
Nodes
o Nodes are used to model a BO's data.
o Nodes are arranged hierarchically to model the various dimensions of the BO data. This
hierarchy is organized underneath a single root node (much like XML). From there, the hierarchy can
be nested arbitrarily deep depending upon business requirements.
o There are several different node types supported by the BOPF. However, most of the
time you'll find yourself working with persistent nodes (i.e., nodes which are backed by the database).
It is also possible to define transient nodes whose contents are loaded on demand at runtime. These
types of nodes can come in handy whenever we want to bridge some alternative persistence model
(e.g. data obtained via service calls).
o Each node consists of one or more attributes which describe the type of data stored
within the node:
Attributes come in two distinct varieties: persistent attributes and transient
attributes. Persistent attributes represent those attributes that will be persisted whenever the BO is
saved. Transient attributes are volatile attributes which are loaded on demand.
A node's attributes are defined in terms of structure definitions from the ABAP
Dictionary.
o At runtime, a BO node is like a container which may have zero, one, or many rows. If
you're familiar with the concept of controller contexts with the Web Dynpro programming model, then
this concept should feel familiar to you. If not, don't worry; we'll demonstrate how this works whenever
we look at the BOPF API.
Actions
o Actions define the services (or behavior) of a BO.
o Actions are assigned to individual nodes within a BO.
o The functionality provided by an action is (usually) defined in terms of an ABAP Objects
class that implements the /BOBF/IF_FRW_ACTION interface.
o To some extent, it is appropriate to think of actions as being similar to the methods of an
ABAP Objects class.
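To make this concrete, a bare-bones action class might look something like the following sketch. The class name is hypothetical, only the EXECUTE method is shown, and /BOBF/IF_FRW_ACTION declares further methods that a complete implementation must also provide:

```abap
CLASS lcl_deliver_action DEFINITION.
  PUBLIC SECTION.
    "All BOPF action logic plugs in via this framework interface:
    INTERFACES /bobf/if_frw_action.
ENDCLASS.

CLASS lcl_deliver_action IMPLEMENTATION.
  "EXECUTE is invoked by the framework when the action is triggered.
  "(Other interface methods are omitted from this sketch.)
  METHOD /bobf/if_frw_action~execute.
    "IT_KEY holds the keys of the node rows the action targets;
    "IO_READ and IO_MODIFY give generic read/write access to the BO.
    "Read the rows, apply the business logic, write any changes back
    "via IO_MODIFY, and report problems via EO_MESSAGE and
    "ET_FAILED_KEY.
  ENDMETHOD.
ENDCLASS.
```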
Associations
o Though BOs are designed to be self-contained, autonomous entities, they do not have
to exist in isolation. With associations, we can define a direct and unidirectional relationship from one
BO to another.
o For example, in just a moment, we'll take a look at a sample BO called
/BOBF/DEMO_SALES_ORDER which is used to model sales orders. Here, we'll see how the product
assignments for sales order items is defined in terms of an association with a product BO called
/BOBF/DEMO_PRODUCT. This composition technique makes it possible to leverage not only the product
BO's data model, but also its behaviors, etc.
o Associations allow us to integrate BOs together in complex assemblies, à la Legos.
Determinations
o According to the aforementioned BOPF enhancement guide, a determination "is an
element assigned to a business object node that describes internal changing business logic on the
business object".
o In some respects, determinations are analogous to database triggers. In other words,
they are functions that are triggered whenever certain triggering conditions are fulfilled. These
conditions are described in terms of a series of patterns:
"Derive dependent data immediately after modification"
This pattern allows us to react to changes made to a given BO node. For
example, we might use this event to go clean up some related data.
"Derive dependent data before saving"
This pattern allows us to hang some custom logic on a given BO node
before it is saved. This could be as simple as using a number range object to assign an ID value to a
node attribute or as complex as triggering an interface.
"Fill transient attributes of persistent nodes"
This pattern is often used in conjunction with UI development. Here, we
might want to load labels and descriptive texts into a series of transient attributes to be displayed on
the screen.
Note: This determination can be bypassed via the API if the lookup
process introduces unnecessary overhead.
"Derive instances of transient nodes"
This pattern allows us to load transient nodes into memory on demand.
Here, for example, we might look up real-time status data from a Web service and load it into the
attributes of a transient node for downstream consumption.
o Determination patterns are described in detail within the aforementioned BOPF
enhancement guide.
o The logic within a determination is defined via an ABAP Objects class that implements
the /BOBF/IF_FRW_DETERMINATION interface.
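As a rough sketch (the class name is hypothetical, and the interface declares additional methods beyond EXECUTE that a complete class must implement), a determination class has this shape:

```abap
CLASS lcl_fill_texts DEFINITION.
  PUBLIC SECTION.
    "All determination logic plugs in via this framework interface:
    INTERFACES /bobf/if_frw_determination.
ENDCLASS.

CLASS lcl_fill_texts IMPLEMENTATION.
  "EXECUTE fires when one of the configured trigger patterns (see the
  "patterns described above) is met for the assigned node.
  "(Other interface methods are omitted from this sketch.)
  METHOD /bobf/if_frw_determination~execute.
    "Typical flow: read the triggering rows via IO_READ, derive the
    "dependent values, and write them back via IO_MODIFY; report any
    "failures via EO_MESSAGE and ET_FAILED_KEY.
  ENDMETHOD.
ENDCLASS.
```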
Validations
o According to the BOPF enhancement guide, validations are "an element of a business
object node that describes some internal checking business logic on the business object".
o Validations come in two distinct forms:
Action Validations
Action validations are used to determine whether or not a particular action
can be executed against a BO node.
Consistency Validations
As the name suggests, consistency validations are used to ensure that a
BO node is consistent. Such validations are called at pre-defined points within the BOPF BO
transaction cycle to ensure that BO nodes are persisted in a consistent state.
o The validation logic is encapsulated within an ABAP Objects class that implements the
/BOBF/IF_FRW_VALIDATION interface.
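A validation class follows the same pattern. The sketch below is illustrative only (hypothetical class name; only the EXECUTE method shown):

```abap
CLASS lcl_check_root DEFINITION.
  PUBLIC SECTION.
    "All validation logic plugs in via this framework interface:
    INTERFACES /bobf/if_frw_validation.
ENDCLASS.

CLASS lcl_check_root IMPLEMENTATION.
  "EXECUTE is called by the framework at the configured points in the
  "transaction cycle (or when a consumer triggers the checks).
  METHOD /bobf/if_frw_validation~execute.
    "Read the rows named in IT_KEY via IO_READ and check them. For
    "every inconsistent row: append its key to ET_FAILED_KEY and add
    "an explanatory message to EO_MESSAGE so the consumer can show
    "the user what is wrong.
  ENDMETHOD.
ENDCLASS.
```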
Queries
o Queries are BO node entities which allow us to search for BOs using various types of
search criteria.
o Queries make it possible for consumers to access BOs without knowing the BO key up
front.
o Queries also integrate quite nicely with search frameworks and the like.
o Queries come in two varieties:
Node Attribute Queries
Node attribute queries are modeled queries whose logic is defined within
the BOPF runtime. These simple queries can be used whenever you simply need to search for BO
nodes by their attributes (e.g. ID = '12345').
Custom Queries
Custom queries allow you to define your own query logic by plugging in an
ABAP Objects class that implements the /BOBF/IF_FRW_QUERY interface.
The figure below illustrates how all of these entities fit together within a BO node definition. Here, I've
pulled up a BO called /BOBF/DEMO_SALES_ORDER in Transaction /BOBF/CONF_UI. In this transaction,
the BO metadata is organized into several different panels:
On the top left-hand side of the screen, you can see the BO's node structure. Here, you can
see that the node structure is organized underneath a top-level ROOT node which models sales
order header data. Underneath this node are several child nodes which model sales order items,
customer assignment, and texts. The ITEM node in turn encompasses its own child nodes to model
item-level data.
On the bottom left-hand side of the screen, we can browse through the node collection of a BO
and view the entity assignments of a given node. As you can see in the figure, each node contains
folders which organize assigned actions, validations, and so on.
In the middle of the screen, we can view additional details about a selected node by double-
clicking on a node within the Node Structure panel on the left-hand side of the screen. Here, we can
look at a node's data model, implementation classes, and so on.
We'll have an opportunity to get a little more hands on with these entities in upcoming blog entries.
For now, our focus is on grasping how pieces fit together and where to go to find the information we
need to get started with a BO.
Next Steps
At this point, you should have a decent feel for how BOs are modeled at design time. In my next blog,
we'll shift gears and begin manipulating BOs using the provided BOPF APIs. This will help put all of
these entities into perspective.
Note: The code bundle described above has been enhanced as of 9/18/2013. The code was
reworked to factor out a BOPF utilities class of sorts and also demonstrate how to traverse over to
dependent objects (DOs).
BOPF API Overview
Before we begin coding with the BOPF API, let's first take a look at its basic structure. The UML class
diagram below highlights some of the main classes that make up the BOPF API. At the end of the
day, there are three main objects that we'll be working with to perform most of the operations within
the BOPF:
/BOBF/IF_TRA_TRANSACTION_MGR
o This object reference provides a transaction manager which can be used to manage
transactional changes. Such transactions could contain a single step (e.g. update node X) or be
strung out across multiple steps (add a node, call an action, and so on).
/BOBF/IF_TRA_SERVICE_MANAGER
o The service manager object reference provides us with the methods we need to lookup
BO nodes, update BO nodes, trigger validations, perform actions, and so on.
/BOBF/IF_FRW_CONFIGURATION
o This object reference provides us with metadata for a particular BO. We'll explore the
utility of having access to this metadata coming up shortly.
In the upcoming sections, I'll show you how these various API classes collaborate in typical BOPF
use cases. Along the way, we'll encounter other useful classes that can be used to perform specific
tasks. You can find a complete class listing within package /BOBF/MAIN.
Note: As you'll soon see, the BOPF API is extremely generic in nature. While this provides
tremendous flexibility, it also adds a certain amount of tedium to common tasks. Thus, in many
applications, you may find that SAP has elected to wrap the API up in another API that is more
convenient to work with. For example, in the SAP EHSM solution, SAP defines an "Easy Node
Access" API which simplifies the way that developers traverse BO nodes, perform updates, and so on.
Getting Started
To drive the application functionality, we'll create a local test driver class called LCL_DEMO. As you can
see in the code excerpt below, this test driver class loads the core BOPF API objects at setup
whenever the CONSTRUCTOR method is invoked. Here, the factory classes illustrated in the UML class
diagram shown in the previous section are used to load the various object references.
METHODS:
constructor RAISING /bobf/cx_frw.
ENDCLASS.
For the most part, this should seem fairly straightforward. However, you might be wondering where I
came up with the IV_BO_KEY parameter in the GET_SERVICE_MANAGER() and GET_CONFIGURATION()
factory method calls. This value is provided to us via the BO's constants interface
(/BOBF/IF_DEMO_CUSTOMER_C in this case) which can be found within the BO configuration in
Transaction /BOBF/CONF_UI (see below). This auto-generated constants interface provides us with
a convenient mechanism for addressing a BO's key, its defined nodes, associations, queries, and so
on. We'll end up using this interface quite a bit during the course of our development.
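Putting these pieces together, the constructor boils down to three factory calls. The following is a minimal sketch (error handling omitted; SC_BO_KEY is the BO key constant from the generated constants interface):

```abap
CLASS lcl_demo DEFINITION CREATE PUBLIC.
  PUBLIC SECTION.
    METHODS constructor RAISING /bobf/cx_frw.
  PRIVATE SECTION.
    DATA mo_txn_mngr TYPE REF TO /bobf/if_tra_transaction_mgr.
    DATA mo_svc_mngr TYPE REF TO /bobf/if_tra_service_manager.
    DATA mo_bo_conf  TYPE REF TO /bobf/if_frw_configuration.
ENDCLASS.

CLASS lcl_demo IMPLEMENTATION.
  METHOD constructor.
    "Obtain the central transaction manager:
    mo_txn_mngr =
      /bobf/cl_tra_trans_mgr_factory=>get_transaction_manager( ).

    "Obtain the service manager for the customer demo BO; the BO key
    "comes from the generated constants interface:
    mo_svc_mngr =
      /bobf/cl_tra_serv_mgr_factory=>get_service_manager(
        /bobf/if_demo_customer_c=>sc_bo_key ).

    "Obtain the BO's metadata (configuration) object:
    mo_bo_conf =
      /bobf/cl_frw_factory=>get_configuration(
        /bobf/if_demo_customer_c=>sc_bo_key ).
  ENDMETHOD.
ENDCLASS.
```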
Creating New Customers
Once we have the basic framework in place, we are ready to commence with the development of the
various CRUD operations that our application will support. To get things started, we'll take a look at
the creation of a new customer instance. For the most part, this involves little more than a call to the
MODIFY() method of the /BOBF/IF_TRA_SERVICE_MANAGER object reference. Of course, as you can see
in the code excerpt below, there is a fair amount of setup that we must do before we can call this
method.
IF lv_rejected EQ abap_true.
lo_driver->display_messages( lo_message ).
RETURN.
ENDIF.
As you can see in the code excerpt above, the majority of the code is devoted to building a table
which is passed in the IT_MODIFICATION parameter of the MODIFY() method. Here, a separate record is
created for each node row that is being modified (or inserted in this case). This record contains
information such as the node object key (NODE), the edit mode (CHANGE_MODE), the row key (KEY)
which is an auto-generated GUID, association/parent key information, and of course, the actual data
(DATA). If you've ever worked with ALE IDocs, then this will probably feel vaguely familiar.
Looking more closely at the population of the node row data, you can see that we're working with data
references which are created dynamically using the CREATE DATA statement. This indirection is
necessary since the BOPF API is generic in nature. You can find the structure definitions for each
node by double-clicking on the node in Transaction /BOBF/CONF_UI and looking at the Combined
Structure field (see below).
Once the modification table is filled out, we can call the MODIFY() method to insert the record(s).
Assuming all is successful, we can then commit the transaction by calling the SAVE() method on the
/BOBF/IF_TRA_TRANSACTION_MANAGER instance. Should any errors occur, we can display the error
messages using methods of the /BOBF/IF_FRW_MESSAGE object reference which is returned from both
methods. This is evidenced by the simple utility method DISPLAY_MESSAGES() shown below. That's
pretty much all there is to it.
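Condensed into a sketch, the create flow looks roughly like this. Here, LO_SVC_MNGR and LO_TXN_MNGR stand for the service manager and transaction manager references obtained from the factories, and LR_S_ROOT is assumed to reference a structure of the ROOT node's combined structure type (see /BOBF/CONF_UI), already filled with the new customer's attribute values:

```abap
DATA lt_mod      TYPE /bobf/t_frw_modification.
DATA lo_change   TYPE REF TO /bobf/if_tra_change.
DATA lo_message  TYPE REF TO /bobf/if_frw_message.
DATA lv_rejected TYPE abap_bool.
DATA lr_s_root   TYPE REF TO data.  "row data created via CREATE DATA
FIELD-SYMBOLS <ls_mod> LIKE LINE OF lt_mod.

"One modification record per node row to be created:
APPEND INITIAL LINE TO lt_mod ASSIGNING <ls_mod>.
<ls_mod>-node        = /bobf/if_demo_customer_c=>sc_node-root.
<ls_mod>-change_mode = /bobf/if_frw_c=>sc_modify_create.
<ls_mod>-key         = /bobf/cl_frw_factory=>get_new_key( ).
<ls_mod>-data        = lr_s_root.

"Stage the change with the service manager...
lo_svc_mngr->modify(
  EXPORTING it_modification = lt_mod
  IMPORTING eo_change       = lo_change
            eo_message      = lo_message ).

"...and commit the LUW via the transaction manager:
lo_txn_mngr->save(
  IMPORTING ev_rejected = lv_rejected
            eo_message  = lo_message ).
```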
As we learned in the previous blog post, most BOs come with one or more queries which allow us to
search for BOs according to various node criteria. In the case of the /BOBF/DEMO_CUSTOMER business
object, we want to use the SELECT_BY_ATTRIBUTES query attached to the ROOT node (see below).
This allows us to lookup customers by their ID value.
The code excerpt below shows how we defined our query in a method called
GET_CUSTOMER_FOR_ID(). As you can see, the query is executed by calling the aptly named QUERY()
method of the /BOBF/IF_TRA_SERVICE_MANAGER instance. The query parameters are provided in the
form of an internal table of type /BOBF/T_FRW_QUERY_SELPARAM. This table type has a similar look
and feel to a range table or SELECT-OPTION. The results of the query are returned in a table of type
/BOBF/T_FRW_KEY. This table contains the keys of the node rows that matched the query parameters.
In our sample case, there should be only one match, so we simply return the first key in the list.
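The query call itself can be sketched as follows. LO_SVC_MNGR is the service manager reference, IV_CUSTOMER_ID is the importing parameter of GET_CUSTOMER_FOR_ID(), and the CUSTOMER_ID attribute name is my assumption about the demo BO's data model:

```abap
DATA lt_sel_par TYPE /bobf/t_frw_query_selparam.
DATA lt_key     TYPE /bobf/t_frw_key.
FIELD-SYMBOLS <ls_sel_par> LIKE LINE OF lt_sel_par.

"Selection parameters look much like a range table / SELECT-OPTION:
APPEND INITIAL LINE TO lt_sel_par ASSIGNING <ls_sel_par>.
<ls_sel_par>-attribute_name = 'CUSTOMER_ID'. "<== assumed attribute
<ls_sel_par>-sign           = 'I'.
<ls_sel_par>-option         = 'EQ'.
<ls_sel_par>-low            = iv_customer_id.

"Execute the modeled SELECT_BY_ATTRIBUTES query on the ROOT node:
lo_svc_mngr->query(
  EXPORTING
    iv_query_key            =
      /bobf/if_demo_customer_c=>sc_query-root-select_by_attributes
    it_selection_parameters = lt_sel_par
  IMPORTING
    et_key                  = lt_key ).

"LT_KEY now holds the keys of all matching ROOT rows; here we expect
"at most one match, so we simply take the first entry:
READ TABLE lt_key INDEX 1 INTO DATA(ls_key).
```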
As you can see in the code excerpt below, we begin our search by accessing the customer ROOT
node using the RETRIEVE() method. Here, the heavy lifting is performed by the GET_NODE_ROW() and
GET_NODE_TABLE() helper methods. Looking at the implementation of the GET_NODE_TABLE() method,
you can see how we're using the /BOBF/IF_FRW_CONFIGURATION object reference to lookup the node's
metadata. This metadata provides us with the information we need to construct an internal table to
house the results returned from the RETRIEVE() method. The GET_NODE_ROW() method then
dynamically retrieves the record located at the index defined by the IV_INDEX parameter.
Within the DISPLAY_CUSTOMER() method, we get our hands on the results by performing a cast on the
returned structure reference. From here, we can access the row attributes as per usual.
After the root node has been retrieved, we can traverse to the child nodes of the
/BOBF/DEMO_CUSTOMER object using the RETRIEVE_BY_ASSOCIATION() method. Here, the process is
basically the same. The primary difference is in the way we lookup the association metadata which is
used to build the call to RETRIEVE_BY_ASSOCIATION(). Once again, we perform a cast on the returned
structure reference and display the sub-node attributes from there.
PRIVATE SECTION.
METHODS:
get_node_table IMPORTING iv_key TYPE /bobf/conf_key
iv_node_key TYPE /bobf/obm_node_key
iv_edit_mode TYPE /bobf/conf_edit_mode
DEFAULT /bobf/if_conf_c=>sc_edit_read_only
RETURNING VALUE(rr_data) TYPE REF TO data
RAISING /bobf/cx_frw,
iv_key = lv_customer_key
iv_node_key = /bobf/if_demo_customer_c=>sc_node-root
iv_assoc_key = /bobf/if_demo_customer_c=>sc_association-root-root_text
iv_index = 1 ).
WRITE: / 'Short Text:', lr_s_text->text.
CATCH /bobf/cx_frw INTO lx_bopf_ex.
lv_err_msg = lx_bopf_ex->get_text( ).
WRITE: / lv_err_msg.
ENDTRY.
ENDMETHOD. " METHOD display_customer
METHOD get_node_table.
"Method-Local Data Declarations:
DATA lt_key TYPE /bobf/t_frw_key.
DATA ls_node_conf TYPE /bobf/s_confro_node.
DATA lo_change TYPE REF TO /bobf/if_tra_change.
DATA lo_message TYPE REF TO /bobf/if_frw_message.
METHOD get_node_row.
"Method-Local Data Declarations:
DATA lr_t_data TYPE REF TO data.
METHOD get_node_table_by_assoc.
"Method-Local Data Declarations:
DATA lt_key TYPE /bobf/t_frw_key.
DATA ls_node_conf TYPE /bobf/s_confro_node.
DATA ls_association TYPE /bobf/s_confro_assoc.
DATA lo_change TYPE REF TO /bobf/if_tra_change.
DATA lo_message TYPE REF TO /bobf/if_frw_message.
METHOD get_node_row_by_assoc.
"Method-Local Data Declarations:
DATA lr_t_data TYPE REF TO data.
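Stripped of the dynamic helper machinery, the retrieval flow reduces to two service-manager calls. In this sketch, LO_SVC_MNGR is the service manager reference, LV_CUSTOMER_KEY is the key found via the query, and <LT_DATA> is assumed to already point at an internal table typed with the respective node's combined table type (created dynamically, as in the GET_NODE_TABLE() helper described above):

```abap
DATA lt_key TYPE /bobf/t_frw_key.
FIELD-SYMBOLS <ls_key>  LIKE LINE OF lt_key.
FIELD-SYMBOLS <lt_data> TYPE INDEX TABLE. "node's combined table type

APPEND INITIAL LINE TO lt_key ASSIGNING <ls_key>.
<ls_key>-key = lv_customer_key. "<== key found via the query

"Read the ROOT node row(s):
lo_svc_mngr->retrieve(
  EXPORTING iv_node_key = /bobf/if_demo_customer_c=>sc_node-root
            it_key      = lt_key
  IMPORTING et_data     = <lt_data> ).

"Traverse to the ROOT_TEXT sub-node via its association:
lo_svc_mngr->retrieve_by_association(
  EXPORTING iv_node_key    = /bobf/if_demo_customer_c=>sc_node-root
            it_key         = lt_key
            iv_association =
              /bobf/if_demo_customer_c=>sc_association-root-root_text
            iv_fill_data   = abap_true
  IMPORTING et_data        = <lt_data> ).
```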
The code excerpt below shows how the changes are carried out. Here, we're simply updating the
address string on the customer. Of course, we could have performed wholesale changes if we had
wanted to.
FIELD-SYMBOLS:
<ls_mod> LIKE LINE OF lt_mod.
IF lv_rejected EQ abap_true.
lo_driver->display_messages( lo_message ).
RETURN.
ENDIF.
Next Steps
I often find that the best way to learn a technology framework is to see how it plays out via code. At
this level, we can clearly visualize the relationships between the various entities and see how they
perform at runtime. Hopefully after reading this post, you should have a better understanding of how
all the BOPF pieces fit together. In my next blog post, we'll expand upon what we've learned and
consider some more advanced features of the BOPF API.
Whatever the business rules might be, the point is that we want to ensure that a BO is consistent
throughout each checkpoint in its object lifecycle. As we learned in part 2 of this blog series, the
BOPF allows us to define these consistency checks in the form of validations. For example, in the
screenshot below, you can see how SAP has created a validation called CHECK_ROOT for the ROOT
node of the /BOBF/DEMO_SALES_ORDER demo BO. This validation is used to perform a consistency
check on the sales order header-level fields to make sure that they are valid before an update is
committed to the database.
One of the nice things about validations like CHECK_ROOT is that they are automatically called by the
BOPF framework at specific points within the transaction lifecycle. However, sometimes we might
want to trigger such validations interactively. For example, when building a UI on top of a BO, we
might want to provide a check function which validates user input before they save their changes.
This is demonstrated in the /BOBF/DEMO_SALES_ORDER Web Dynpro ABAP application shown below.
From a code perspective, the heavy lifting for the check operation is driven by the
CHECK_CONSISTENCY() method of the /BOBF/IF_TRA_SERVICE_MANAGER interface as shown in the code
excerpt below. Here, we simply provide the service manager with the target node key and the BO
instance key and the framework will take care of calling the various validations on our behalf. We can
then check the results of the validation by looking at the /BOBF/IF_FRW_MESSAGE instance which was
introduced in the previous blog.
TRY.
APPEND INITIAL LINE TO lt_key ASSIGNING <ls_key>.
<ls_key>-key = iv_key. "<== Sales Order BO Key
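Filled out, the check might look like the following sketch (LO_SVC_MNGR is the /BOBF/IF_TRA_SERVICE_MANAGER reference; verify the exact CHECK_CONSISTENCY() signature against the interface in your system):

```abap
DATA lt_key        TYPE /bobf/t_frw_key.
DATA lt_failed_key TYPE /bobf/t_frw_key.
DATA lo_message    TYPE REF TO /bobf/if_frw_message.
FIELD-SYMBOLS <ls_key> LIKE LINE OF lt_key.

TRY.
    "Set the BO instance key:
    APPEND INITIAL LINE TO lt_key ASSIGNING <ls_key>.
    <ls_key>-key = iv_key. "<== Sales Order BO Key

    "Let the framework run all consistency validations for the rows:
    lo_svc_mngr->check_consistency(
      EXPORTING it_key        = lt_key
      IMPORTING eo_message    = lo_message
                et_failed_key = lt_failed_key ).

    "Rows listed in LT_FAILED_KEY failed a validation; the details
    "are carried in LO_MESSAGE.
  CATCH /bobf/cx_frw INTO DATA(lx_frw).
    "Handle framework exceptions here.
ENDTRY.
```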
I'll show you how to implement validations within a BO in an upcoming blog entry.
Triggering Actions
The behaviors of a business object within the BOPF are defined as actions. From a conceptual point-
of-view, actions are analogous to methods/functions in the object-oriented paradigm. The following
code excerpt demonstrates how actions are called using the BOPF API. Here, we're calling the
DELIVER action defined in the ROOT node of the /BOBF/DEMO_SALES_ORDER demo BO. As you can
see, the code reads like a dynamic function/method call since we generically pass the name of
the action along with its parameters to the DO_ACTION() method of the
/BOBF/IF_TRA_SERVICE_MANAGER interface. Other than that, it's pretty much business as usual.
TRY.
"Set the BO instance key:
APPEND INITIAL LINE TO lt_key ASSIGNING <ls_key>.
<ls_key>-key = iv_key. "<== Sales Order BO Key
...
CATCH /bobf/cx_frw INTO lx_frw.
...
ENDTRY.
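Filling in the elided portion, such a call might look like the following sketch. LO_SVC_MNGR is the service manager reference, and the SC_ACTION-ROOT-DELIVER constant name is my assumption based on the naming convention of generated constants interfaces:

```abap
DATA lt_key        TYPE /bobf/t_frw_key.
DATA lt_failed_key TYPE /bobf/t_frw_key.
DATA lo_message    TYPE REF TO /bobf/if_frw_message.
FIELD-SYMBOLS <ls_key> LIKE LINE OF lt_key.

"Set the BO instance key:
APPEND INITIAL LINE TO lt_key ASSIGNING <ls_key>.
<ls_key>-key = iv_key. "<== Sales Order BO Key

"Trigger the DELIVER action on the ROOT node:
lo_svc_mngr->do_action(
  EXPORTING
    iv_act_key    =
      /bobf/if_demo_sales_order_c=>sc_action-root-deliver "<== assumed
    it_key        = lt_key
  IMPORTING
    eo_message    = lo_message
    et_failed_key = lt_failed_key ).

"ET_FAILED_KEY / EO_MESSAGE tell us whether the call succeeded.
```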
We can verify whether a BOPF action call was successful by querying the EO_MESSAGE object reference
and/or the ET_FAILED_KEY table parameter returned by the DO_ACTION() method. Refer back to my
previous blog for an example of the former technique. As always, remember to commit the
transaction using the SAVE() method defined by the /BOBF/IF_TRA_TRANSACTION_MGR interface.
Action Validations
In part two of this blog series, I mentioned that there are technically two different types of validations:
the consistency check validations discussed earlier in this blog and action validations which are used
to determine whether or not an action can be executed at runtime. Since the BOPF framework
invokes action validations automatically whenever actions are invoked, it is rare that you will have a
need to invoke them directly. However, the /BOBF/IF_TRA_SERVICE_MANAGER interface does provide
the CHECK_ACTION() method if you wish to do so (see the code excerpt below).
TRY.
"Set the BO instance key:
APPEND INITIAL LINE TO lt_key ASSIGNING <ls_key>.
<ls_key>-key = iv_key. "<== Sales Order BO Key
...
CATCH /bobf/cx_frw INTO lx_frw.
...
ENDTRY.
Transaction Management
Another element of the BOPF API that we have glossed over up to now is the transaction manager
interface /BOBF/IF_TRA_TRANSACTION_MGR. This interface provides us with a simplified access point
into a highly sophisticated transaction management framework. While the details of this framework
are beyond the scope of this blog series, suffice it to say that the BOPF transaction manager does
more here than simply provide basic object-relational persistence. It also handles caching,
transactional locking, and more. You can see how some of these features are implemented by
looking at the Transactional Behavior settings of a business object definition in Transaction
/BOBF/CONF_UI (see below).
So far, we have seen a bit of the /BOBF/IF_TRA_TRANSACTION_MGR interface on display when we looked
at how to insert/update records. Here, as you may recall, we used the SAVE() method of the
/BOBF/IF_TRA_TRANSACTION_MGR interface to save these records. In many respects, the SAVE()
method is analogous to the COMMIT WORK statement in ABAP in that it commits the transactional
changes to the database. Here, as is the case with the COMMIT WORK statement, we could be
committing multiple updates as one logical unit of work (LUW) - e.g. an insert followed by a series of
updates.
Once a transaction is committed, we can reset the transaction manager by calling the CLEANUP()
method. Or, alternatively, we can also use this method to abandon an in-flight transaction once an
error condition has been detected. In the latter case, this is analogous to using the ROLLBACK WORK
statement in ABAP to roll back a transaction.
During the course of a transaction, the BOPF transaction manager tracks the changes that are made
to individual business objects internally so that it can determine what needs to be committed and/or
rolled back. If desired, we can get a peek at the queued-up changes by calling the
GET_TRANSACTIONAL_CHANGES() method of the /BOBF/IF_TRA_TRANSACTION_MGR interface. This
method will return an object reference of type /BOBF/IF_TRA_CHANGE that can be used to query the
change list, modify it in certain cases, and so on.
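The commit/rollback pairing described above can be sketched as follows (LO_TXN_MNGR is the transaction manager reference obtained from the factory):

```abap
DATA lv_rejected TYPE abap_bool.
DATA lo_message  TYPE REF TO /bobf/if_frw_message.

"Commit the accumulated changes as one LUW (analogous to
"COMMIT WORK):
lo_txn_mngr->save(
  IMPORTING ev_rejected = lv_rejected
            eo_message  = lo_message ).

IF lv_rejected = abap_true.
  "The save was rejected (e.g. a consistency validation failed);
  "abandon the in-flight transaction (analogous to ROLLBACK WORK):
  lo_txn_mngr->cleanup( ).
ENDIF.
```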
Next Steps
At this point, we have hit on most of the high points when it comes to interacting with the BOPF API
from a client perspective. In my next blog, we'll shift gears and begin looking at ways of enhancing
BOs using the BOPF toolset.
What to Enhance?
Before we dive into the exploration of specific enhancement techniques, let's first take a look at the
kinds of entities we're allowed to enhance in a business object. Aside from implicit enhancements
applied to implementation classes using the Enhancement Framework, the types of entities that we
can enhance within a business object are as follows:
Custom Attributes
o For a given node, we might want to define a handful of additional custom
attributes. These attributes could be persistent (i.e., they get appended to the target database table
which contains the node data) or transient in nature.
New Sub-Nodes
o In some cases, we may need to do more than simply define a few new attributes on an
existing node. Using the relational data model as our guide, we may determine that a new sub-node
is needed to properly model some new dimension of data (e.g. 1-to-many relations, etc.). Depending
on the requirement, the sub-node(s) might be persistent or transient in nature.
Determinations
o If we add new custom attributes to a given node, it stands to reason that we might also
want to create a custom determination to manage these attributes.
o Or, we might have a standalone requirement which calls for some sort of "trigger" to be
fired whenever a specific event occurs (e.g. fire an event to spawn a workflow, etc.).
Consistency Validations
o If we are enhancing the data model of a business object, we might want to define a
consistency validation to ensure that the new data points remain consistent.
o A custom validation might also be used to graft in a new set of business rules or a
custom security model.
Actions
o If we have certain operations which need to be performed on a business object, we
would prefer to encapsulate those operations as an action on the business object as opposed to
some standalone function module or class.
Queries
o In some cases, the set of defined queries for a business object might not be sufficient
for our needs. In these situations, we might want to define custom queries to encapsulate the
selection logic so that we can use the generic query services of the BOPF API as opposed to some
custom selection method.
You can find a detailed treatment of supported enhancement options in the BOPF Enhancement
Workbench Help documentation which is provided as a separate download in SAP Note #1457235.
This document provides a wealth of information concerning the use of the BOPF Enhancement
Workbench, enhancement strategies, and even the BOPF framework in general. Given the amount of
detail provided there, I won't attempt to re-invent the wheel in this blog post. Instead, I'll simply hit on
the high points and leave the nitty-gritty details to the help documentation.
Once the enhancement is created, you will be able to edit your enhancement object in the workbench
perspective of the BOPF Enhancement Workbench shown below. As you can see, it has a similar
look-and-feel to that of the normal BO browser tool (Transaction /BOBF/CONF_UI). From here, we
can begin adding custom entities by right-clicking on the target node and selecting from the available
menu options. We'll see how this works in the upcoming sections.
One final item I would draw your attention to with enhancement objects is the assigned constants
interface (highlighted above). This constants interface can be used to access the enhancement object
entities in the same way that the super BO's constants interface is used for BOPF API calls, etc.
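To illustrate, the generated constants interface lets us address the custom entities through the regular BOPF API. In the sketch below, ZIF_DEMO_BO_C (super BO) and ZIF_EXT_DEMO_C (enhancement), along with the Z* types, are hypothetical names; the service manager calls are standard:

```abap
DATA lt_root_keys TYPE /bobf/t_frw_key.
DATA lt_subnode   TYPE ztt_zz_subnode.   " hypothetical combined table type

DATA(lo_svc_mgr) = /bobf/cl_tra_serv_mgr_factory=>get_service_manager(
                     zif_demo_bo_c=>sc_bo_key ).

" Navigate from the standard ROOT node to the custom sub-node using the
" association constant from the enhancement's constants interface
lo_svc_mgr->retrieve_by_association(
  EXPORTING iv_node_key    = zif_demo_bo_c=>sc_node-root
            it_key         = lt_root_keys
            iv_association = zif_ext_demo_c=>sc_association-root-zz_subnode
            iv_fill_data   = abap_true
  IMPORTING et_data        = lt_subnode ).
```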
For more complex data requirements, we typically need to define sub-nodes. This can be achieved by
right-clicking on the parent node and selecting the Create Subnode menu option. This kicks off a
wizard process in which you can select the sub-node's name, its persistent and/or transient
structures, and the rest of the auto-generated dictionary types which go along with a node definition
(e.g. combined structure/table type, database table, etc.). Most of this is pretty standard stuff, but I
would draw your attention to the step which creates the persistent and/or transient structures. Note
that these structures must already exist in the ABAP Dictionary before you move on from the
Attributes step in the wizard process. And, in the case of the persistent structure, you must include
the /BOBF/S_ADMIN structure as the first component.
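The required layout of the persistent structure can be sketched as follows. The real structure is created in the ABAP Dictionary (SE11); the TYPES declaration below is only an illustration of the equivalent layout, and all ZZ* names are hypothetical:

```abap
TYPES BEGIN OF zs_zz_subnode_d.       " hypothetical persistent structure
" /BOBF/S_ADMIN must come first; it carries the node's administrative
" fields (instance key, parent key, etc.)
INCLUDE TYPE /bobf/s_admin.
TYPES: zz_description TYPE string,    " hypothetical custom attributes
       zz_priority    TYPE i.
TYPES END OF zs_zz_subnode_d.
```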
After the custom sub-node is created, you can fill out its attributes by adding components to the
persistent/transient structures defined by the sub-node. If the sub-node is a persistent node, then we
can create, modify, and retrieve node instances using the BOPF API as per usual. However, in the
case of transient nodes, we need determinations to pre-fetch the data for us. We'll see how to define
such determinations next.
Defining Determinations
According to the help documentation, determinations encapsulate internal changing business logic on
a business object. Unlike the logic encapsulated in actions, which can be triggered at any time, the
business logic contained within determinations is triggered at specific times within the BO life cycle
(e.g. right before a node is saved, etc.). So, in a way, it is appropriate to think of determinations as
being a little bit like user exits/BAdIs/enhancement spots in that they provide a place to hang custom
logic at particular points within the process flow.
Once we determine (no pun intended) that we want to create a determination for a given node, we
can do so by simply right-clicking on that node and selecting the Create Determination menu option.
This will spawn a wizard which guides us through the process. Here, there are two main properties
that we must account for:
1. Implementing Class:
o We must create or assign an ABAP Objects class that implements the
/BOBF/IF_FRW_DETERMINATION interface.
2. Determination Pattern:
o This property defines the event which triggers the determination. As you can see below,
the set of available patterns will vary depending on the type of node you're enhancing, its location in
the node hierarchy, and so on.
o Once a pattern is selected, you may be presented with additional options for refining
when an event is triggered. For example, if we select the pattern "Derive dependent data immediately
after modification", we will have the opportunity to specify if the dependent data should be
created/modified after any modification, only when the node is created the first time, etc.
Because determinations can be used for a lot of different things, they can be implemented in a lot of
different ways. Here, it is very important that you pay close attention to selecting the right pattern for
the right job. The aforementioned help documentation provides a good set of guidelines to assist
here. Other valuable resources include the interface documentation for the
/BOBF/IF_FRW_DETERMINATION interface in the Class Builder tool and the SAP standard-delivered
determination implementations available in the system you're working on.
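To make the pattern concrete, here is a minimal sketch of a determination class that derives a transient attribute whenever its node is modified. The /BOBF/* interface and method names are standard (the exact set of interface methods may vary slightly by release); all Z* class, type, and field names are hypothetical:

```abap
CLASS zcl_d_calc_total DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES /bobf/if_frw_determination.
ENDCLASS.

CLASS zcl_d_calc_total IMPLEMENTATION.
  METHOD /bobf/if_frw_determination~execute.
    " Read the node instances handed to us via IT_KEY
    DATA lt_item TYPE ztt_demo_item.          " hypothetical combined table type
    io_read->retrieve(
      EXPORTING iv_node = is_ctx-node_key
                it_key  = it_key
      IMPORTING et_data = lt_item ).

    LOOP AT lt_item REFERENCE INTO DATA(lr_item).
      " Derive the transient total and write it back to the BOPF buffer
      lr_item->zz_total = lr_item->zz_price * lr_item->zz_quantity.
      io_modify->update( iv_node = is_ctx-node_key
                         iv_key  = lr_item->key
                         is_data = lr_item ).
    ENDLOOP.
  ENDMETHOD.

  METHOD /bobf/if_frw_determination~check.
    " Optional pre-check; left empty so EXECUTE always runs
  ENDMETHOD.

  METHOD /bobf/if_frw_determination~check_delta.
  ENDMETHOD.
ENDCLASS.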
Defining Validations
Consistency validations are created via a similar wizard-driven process. Here, there are three main
properties that we must account for:
1. Implementing Class:
o Here, we must create/assign an ABAP Objects class which implements the
/BOBF/IF_FRW_VALIDATION interface.
2. Request Nodes:
o This property allows us to specify which node operations should force a validation to
occur (e.g. during creates, updates, etc.)
3. Impact:
o With this property, we can specify the behavior of the BOPF framework in cases where
the validation fails. For example, should we simply return an error message, prevent the requested
operation from proceeding, or both?
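A minimal validation sketch is shown below: failed instances are reported via ET_FAILED_KEY, and the framework then reacts according to the configured impact. The /BOBF/* names are standard; the Z* class, type, field, and message names are hypothetical, and the exact ADD_MESSAGE signature should be checked in the Class Builder:

```abap
CLASS zcl_v_check_dates DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES /bobf/if_frw_validation.
ENDCLASS.

CLASS zcl_v_check_dates IMPLEMENTATION.
  METHOD /bobf/if_frw_validation~execute.
    DATA lt_item TYPE ztt_demo_item.          " hypothetical combined table type
    io_read->retrieve(
      EXPORTING iv_node = is_ctx-node_key
                it_key  = it_key
      IMPORTING et_data = lt_item ).

    LOOP AT lt_item REFERENCE INTO DATA(lr_item)
        WHERE zz_end_date < zz_start_date.
      " Flagging the key marks this instance as failed; the framework
      " applies the configured impact (message and/or rejected operation)
      APPEND VALUE #( key = lr_item->key ) TO et_failed_key.
      IF eo_message IS NOT BOUND.
        eo_message = /bobf/cl_frw_factory=>get_message( ).
      ENDIF.
      eo_message->add_message(
        is_msg  = VALUE #( msgty = 'E' msgid = 'ZDEMO' msgno = '001' )
        iv_node = is_ctx-node_key
        iv_key  = lr_item->key ).
    ENDLOOP.
  ENDMETHOD.
ENDCLASS.
```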
Defining Actions
When it comes to actions, we have two basic options:
1. We can create a brand new action definition for a given node (standard or custom).
2. We can enhance existing actions with pre/post action enhancements.
The first case is pretty straightforward. Basically, we simply follow along with the wizard process up to
the point that we reach the Settings step shown below. Here, we must define three main properties
for the action:
1. Implementing Class:
o This property is used to specify the ABAP Objects class which encapsulates the action
logic. The class must implement the /BOBF/IF_FRW_ACTION interface.
2. Action Cardinality:
o The action cardinality property defines the scope of the action. This is somewhat
analogous to the way we have the option of defining class methods or instance methods within a
regular ABAP Objects class. In this case, however, we also have the third option of defining a sort of
"mass-processing" action which works on multiple node instances at once.
3. Parameter Structure:
o If we wish to pass parameters to the action, we can plug in an ABAP Dictionary
structure here to encapsulate the parameters.
Once the action is created, we simply need to plug in the relevant logic in the defined implementation
class. You can find implementation details for this in the interface documentation and/or sample
action classes in the system.
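Such an implementation class might look something like the sketch below, which flips a status flag on the selected instances. Again, the /BOBF/* names are standard while the Z* names and fields are hypothetical:

```abap
CLASS zcl_a_release DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES /bobf/if_frw_action.
ENDCLASS.

CLASS zcl_a_release IMPLEMENTATION.
  METHOD /bobf/if_frw_action~execute.
    " IS_PARAMETERS carries the configured parameter structure (if any)
    " as a data reference; cast it to the concrete type when needed
    DATA lt_root TYPE ztt_demo_root.          " hypothetical combined table type
    io_read->retrieve(
      EXPORTING iv_node = is_ctx-node_key
                it_key  = it_key
      IMPORTING et_data = lt_root ).

    LOOP AT lt_root REFERENCE INTO DATA(lr_root).
      lr_root->zz_status = 'R'.               " hypothetical status attribute
      io_modify->update( iv_node = is_ctx-node_key
                         iv_key  = lr_root->key
                         is_data = lr_root ).
    ENDLOOP.
  ENDMETHOD.
ENDCLASS.
```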
In order to create a pre/post action enhancement, the target action definition in the super BO must
have its "Action Can Be Enhanced" flag set (see below). Assuming that the flag is set, then we can
proceed through the corresponding wizard process in much the same way we would if we were
creating a custom action from scratch. Indeed, as is the case with regular actions, the implementation
class(es) for pre/post action enhancements must implement the /BOBF/IF_FRW_ACTION interface.
Before you go to implement a pre/post action enhancement, I would definitely recommend that you
read through the help documentation so that you understand what you can and cannot do within an
action enhancement. Most of the rules are intuitive, but you can definitely get into trouble if you abuse
these enhancements by using them for things they weren't designed for.
Next Steps
Hopefully by now you have a general feel for how BOs are enhanced and the basic steps required to
achieve these enhancements. As is the case with most programming-related subjects, the best way
to really drive these concepts home is to look at live examples and experiment for yourself. I would
also highly recommend that you read through the aforementioned help documentation as it devotes
quite a bit of time to understanding when and where to apply specific enhancement techniques.
In my next and final blog post in this series, I'll demonstrate another useful tool within the BOPF
toolset: the BO test tool. This tool can be used to experiment with BOs and perform ad hoc unit tests,
etc.
Editing BO Instances
Once the BO metadata is loaded, we have two choices for maintenance:
1. To create a new BO instance, we can double-click on the root node contained in the "Metadata
and Instances" tree on the left-hand side of the editor screen and then select the Create button in the
toolbar (see below). This will cause a new record to be created and loaded into an editable ALV grid.
From here, we can begin filling in node attributes, creating sub-node instances, and so on. Here, I
would draw your attention to the Messages panel located in the bottom left-hand corner of the editor.
These messages can be used to help you fill in the right data.
2. If the BO instance that we want to maintain/display exists already, then we can load it into
context using the Load Instances button menu. As you can see in the screenshot below, this menu
affords us several different alternatives for loading node instances: via a BOPF node query, by
the node instance key, or by an alternative key (e.g. ID). Regardless of the menu path that we take,
the system will attempt to find the target node instance(s) and then load them into the editor window.
From here, we can select individual node instances by double-clicking on them in the Metadata and
Instances tree located on the left-hand side of the screen.
To edit node instances, we can select the node instance record in the editor on the right-hand side of
the screen and choose the appropriate option from the Edit button menu (see below). Then, we can
edit attributes for a node instance using the provided input fields. Alternatively, we also have the
option of deleting a node instance (or indeed an entire BO instance in the case of a root node
instance) by clicking on the Delete Node Instances button.
Regardless of whether we're creating a new BO instance or editing an existing one, the entire
scope of our changes is tracked via a BOPF transaction like the one we would create if we were
doing all this by hand using the BOPF API. At any point along the way, we can choose to commit the
changes using the Save Transaction button, or revert the changes using the Cleanup Transaction
button. Then, we can start the process over by selecting another BO instance or editing the existing
one in place. All in all, it's kind of like table maintenance on steroids. But wait, there's more!
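Under the hood, the Save Transaction and Cleanup Transaction buttons map onto the same transaction manager calls we would make through the BOPF API by hand. A minimal sketch of that round trip (the /BOBF/* factory and methods are standard):

```abap
DATA(lo_txn_mgr) = /bobf/cl_tra_trans_mgr_factory=>get_transaction_manager( ).

" SAVE corresponds to the Save Transaction button; EV_REJECTED tells us
" whether the save was refused (e.g. by a failed consistency validation)
lo_txn_mgr->save( IMPORTING ev_rejected = DATA(lv_rejected) ).

IF lv_rejected = abap_true.
  " CLEANUP discards the pending changes, like the Cleanup Transaction button
  lo_txn_mgr->cleanup( ).
ENDIF.
```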
UI Integration and the FBI Framework
Since the focus of this blog series has been primarily on introducing the BOPF framework, I have
purposefully avoided digressing into specific applications of the BOPF (e.g. in Transportation
Management or EHSM) since these products add additional layers on top of the BOPF that can sort
of cloud the picture a bit if you don't understand core principles of the BOPF itself. However, before I
bring this blog series to a close, I would be remiss if I didn't point out one important (and relatively
generic) framework built on top of the BOPF: the Floorplan Manager BOPF Integration (FBI)
framework. As the name suggests, this framework links BOs from the BOPF with Web UIs based on
the Floorplan Manager (FPM) framework and Web Dynpro ABAP (WDA).
If you're developing Web UIs on top of BOs from the BOPF, then the FBI is definitely something to
take a look at. Essentially, the FBI exploits the genericity of the BOPF API and the accessibility of BO
model data to enable the rapid development of Generic User Interface Building Blocks (GUIBBs)
based on BO nodes. Here, for example, we could create a form GUIBB that allows users to populate
the data for a BO node using a simple input form. In many applications, this can be achieved without
having to write a single line of code. While a detailed discussion of the FBI is beyond the scope of this
blog series, a quick Google search will lead you to some pretty decent resource materials. If you're
new to FPM, I would also offer a shameless plug for my book Web Dynpro ABAP: The
Comprehensive Guide (SAP PRESS, 2012).
Conclusion
When I first started working with the BOPF almost a year ago, I was surprised at how little
documentation there was to get started with. So, what you've seen in this series is the result of a lot of
trial-and-error and lessons learned while debugging through application-specific frameworks into the heart
of the BOPF itself. If you're just getting started with the BOPF, then I hope that you'll find this series
useful to get you up and running. In the coming months and years, I think many more learning
resources will materialize to supplement what I've offered here. Indeed, the number of new dimension
applications based on the BOPF appears to be growing by the day...
One complaint I sometimes hear from other developers is that the BOPF API is cumbersome to work
with. On this point, I can agree to an extent. However, I would argue that such complexities can be
abstracted away pretty easily with a wrapper class or two and some good old fashioned RTTI code.
Other than that, once you get used to the BOPF, I think you'll find that you like it. And this is coming
from a developer who has had many bad experiences with BO frameworks (both in and outside
SAP...curse you EJBs!!!). All in all though, I have found the BOPF to be very comprehensive and
flexible. For me, one of the feel tests I normally conduct to gauge the effectiveness of a framework is
to ask myself how often the framework gets in my way: either because it's too intractable, limited in
functionality or whatever. I have yet to run into any such occurrences with the BOPF. It does a good
job of providing default behaviors/functionality while at the same time affording you the opportunity to
tweak just about everything. For example, if I want to build my own caching mechanism, I can do so
by plugging in my own subclass. If I want to pull data from a HANA appliance in real time, I can do so
in a determination. You get the idea. It's all there, so just poke around a bit and I think you'll find what
you need.
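As an illustration of the wrapper idea mentioned above, even a small class can hide the generic key-and-reference plumbing behind a typed method. Everything prefixed with Z here is hypothetical; only the /BOBF/* factory, interface, and types are standard:

```abap
CLASS zcl_demo_bo_api DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    METHODS constructor.
    " Return typed ROOT node data for a set of instance keys
    METHODS get_roots
      IMPORTING it_key         TYPE /bobf/t_frw_key
      RETURNING VALUE(rt_root) TYPE ztt_demo_root.  " hypothetical table type
  PRIVATE SECTION.
    DATA mo_svc_mgr TYPE REF TO /bobf/if_tra_service_manager.
ENDCLASS.

CLASS zcl_demo_bo_api IMPLEMENTATION.
  METHOD constructor.
    mo_svc_mgr = /bobf/cl_tra_serv_mgr_factory=>get_service_manager(
                   zif_demo_bo_c=>sc_bo_key ).      " hypothetical constants
  ENDMETHOD.

  METHOD get_roots.
    " Consumers get a typed table back instead of juggling generic
    " retrieve parameters at every call site
    mo_svc_mgr->retrieve(
      EXPORTING iv_node_key = zif_demo_bo_c=>sc_node-root
                it_key      = it_key
      IMPORTING et_data     = rt_root ).
  ENDMETHOD.
ENDCLASS.
```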